Test Report: Docker_Linux_containerd 14079

bc7278193255a66f30064dc56185dbbc87656da8:2022-05-31:24200

Test fail (16/265)

TestNetworkPlugins/group/calico/Start (536.35s)
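Repro note: the invocation under test is the one on the net_test.go:101 lines below. A minimal sketch for replaying it outside the CI harness (assumes a built out/minikube-linux-amd64 and a local Docker daemon; the profile name here is arbitrary, the flags are copied from the log):

    out/minikube-linux-amd64 start -p calico-repro --memory=2048 \
        --alsologtostderr --wait=true --wait-timeout=5m \
        --cni=calico --driver=docker --container-runtime=containerd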

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220531174030-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
E0531 17:44:25.073223    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220531174030-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m56.32472053s)

-- stdout --
	* [calico-20220531174030-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node calico-20220531174030-6903 in cluster calico-20220531174030-6903
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0531 17:44:22.354146  191945 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:44:22.354260  191945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:44:22.354266  191945 out.go:309] Setting ErrFile to fd 2...
	I0531 17:44:22.354272  191945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:44:22.354417  191945 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:44:22.354807  191945 out.go:303] Setting JSON to false
	I0531 17:44:22.357221  191945 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5213,"bootTime":1654013849,"procs":1400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:44:22.357346  191945 start.go:125] virtualization: kvm guest
	I0531 17:44:22.360057  191945 out.go:177] * [calico-20220531174030-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:44:22.361667  191945 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:44:22.361625  191945 notify.go:193] Checking for updates...
	I0531 17:44:22.364476  191945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:44:22.365903  191945 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:44:22.367368  191945 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:44:22.368667  191945 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:44:22.370346  191945 config.go:178] Loaded profile config "cert-expiration-20220531174046-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:44:22.370453  191945 config.go:178] Loaded profile config "cilium-20220531174030-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:44:22.370547  191945 config.go:178] Loaded profile config "kindnet-20220531174029-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:44:22.370606  191945 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:44:22.414347  191945 docker.go:137] docker version: linux-20.10.16
	I0531 17:44:22.414439  191945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:44:22.593205  191945 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 17:44:22.466366258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:44:22.593308  191945 docker.go:254] overlay module found
	I0531 17:44:22.596062  191945 out.go:177] * Using the docker driver based on user configuration
	I0531 17:44:22.597362  191945 start.go:284] selected driver: docker
	I0531 17:44:22.597373  191945 start.go:806] validating driver "docker" against <nil>
	I0531 17:44:22.597389  191945 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:44:22.598319  191945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:44:22.754722  191945 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 17:44:22.64527168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:44:22.754891  191945 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 17:44:22.755103  191945 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 17:44:22.757000  191945 out.go:177] * Using Docker driver with the root privilege
	I0531 17:44:22.758522  191945 cni.go:95] Creating CNI manager for "calico"
	I0531 17:44:22.758549  191945 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0531 17:44:22.758565  191945 start_flags.go:306] config:
	{Name:calico-20220531174030-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531174030-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:44:22.761373  191945 out.go:177] * Starting control plane node calico-20220531174030-6903 in cluster calico-20220531174030-6903
	I0531 17:44:22.762783  191945 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 17:44:22.764120  191945 out.go:177] * Pulling base image ...
	I0531 17:44:22.765614  191945 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:44:22.765656  191945 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 17:44:22.765669  191945 cache.go:57] Caching tarball of preloaded images
	I0531 17:44:22.765715  191945 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:44:22.765940  191945 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 17:44:22.765962  191945 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 17:44:22.766102  191945 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/config.json ...
	I0531 17:44:22.766133  191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/config.json: {Name:mkf8a845c9f4ef689c7f45ebda102574a9d56868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:44:22.818206  191945 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 17:44:22.818242  191945 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 17:44:22.818258  191945 cache.go:206] Successfully downloaded all kic artifacts
	I0531 17:44:22.818302  191945 start.go:352] acquiring machines lock for calico-20220531174030-6903: {Name:mk35e713576d28740afd136b293c99fe6d1e5ac3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:44:22.818418  191945 start.go:356] acquired machines lock for "calico-20220531174030-6903" in 99.592µs
	I0531 17:44:22.818439  191945 start.go:91] Provisioning new machine with config: &{Name:calico-20220531174030-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531174030-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:44:22.818545  191945 start.go:131] createHost starting for "" (driver="docker")
	I0531 17:44:22.822025  191945 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 17:44:22.822324  191945 start.go:165] libmachine.API.Create for "calico-20220531174030-6903" (driver="docker")
	I0531 17:44:22.822360  191945 client.go:168] LocalClient.Create starting
	I0531 17:44:22.822465  191945 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:44:22.822503  191945 main.go:134] libmachine: Decoding PEM data...
	I0531 17:44:22.822527  191945 main.go:134] libmachine: Parsing certificate...
	I0531 17:44:22.822611  191945 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:44:22.822636  191945 main.go:134] libmachine: Decoding PEM data...
	I0531 17:44:22.822654  191945 main.go:134] libmachine: Parsing certificate...
	I0531 17:44:22.823071  191945 cli_runner.go:164] Run: docker network inspect calico-20220531174030-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:44:22.870384  191945 cli_runner.go:211] docker network inspect calico-20220531174030-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:44:22.870462  191945 network_create.go:272] running [docker network inspect calico-20220531174030-6903] to gather additional debugging logs...
	I0531 17:44:22.870493  191945 cli_runner.go:164] Run: docker network inspect calico-20220531174030-6903
	W0531 17:44:22.903992  191945 cli_runner.go:211] docker network inspect calico-20220531174030-6903 returned with exit code 1
	I0531 17:44:22.904023  191945 network_create.go:275] error running [docker network inspect calico-20220531174030-6903]: docker network inspect calico-20220531174030-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220531174030-6903
	I0531 17:44:22.904040  191945 network_create.go:277] output of [docker network inspect calico-20220531174030-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220531174030-6903
	
	** /stderr **
	I0531 17:44:22.904084  191945 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:44:22.963099  191945 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-35512cb7416d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:3b:63:ba}}
	I0531 17:44:22.963756  191945 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-1a877c65b8bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:e6:36:74:9e}}
	I0531 17:44:22.964495  191945 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0005185b0] misses:0}
	I0531 17:44:22.964544  191945 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:44:22.964560  191945 network_create.go:115] attempt to create docker network calico-20220531174030-6903 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0531 17:44:22.964600  191945 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220531174030-6903
	I0531 17:44:23.046115  191945 network_create.go:99] docker network calico-20220531174030-6903 192.168.67.0/24 created
	I0531 17:44:23.046160  191945 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220531174030-6903" container
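	The "skipping subnet ... taken" lines above come from walking the host's existing Docker networks before reserving a free /24. A sketch of an equivalent manual check, using only the plain docker CLI (nothing minikube-specific assumed):

	    # print each docker network with its IPAM subnet, e.g. to see which
	    # 192.168.x.0/24 ranges are already claimed on this host
	    docker network ls -q | xargs docker network inspect \
	        --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'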
	I0531 17:44:23.046233  191945 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:44:23.082768  191945 cli_runner.go:164] Run: docker volume create calico-20220531174030-6903 --label name.minikube.sigs.k8s.io=calico-20220531174030-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:44:23.115605  191945 oci.go:103] Successfully created a docker volume calico-20220531174030-6903
	I0531 17:44:23.115697  191945 cli_runner.go:164] Run: docker run --rm --name calico-20220531174030-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531174030-6903 --entrypoint /usr/bin/test -v calico-20220531174030-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:44:23.770962  191945 oci.go:107] Successfully prepared a docker volume calico-20220531174030-6903
	I0531 17:44:23.771016  191945 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:44:23.771037  191945 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 17:44:23.771105  191945 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531174030-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 17:44:31.429701  191945 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531174030-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (7.658377533s)
	I0531 17:44:31.429751  191945 kic.go:188] duration metric: took 7.658710 seconds to extract preloaded images to volume
	W0531 17:44:31.460376  191945 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:44:31.460551  191945 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:44:31.580122  191945 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220531174030-6903 --name calico-20220531174030-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531174030-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220531174030-6903 --network calico-20220531174030-6903 --ip 192.168.67.2 --volume calico-20220531174030-6903:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 17:44:32.000287  191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Running}}
	I0531 17:44:32.033785  191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
	I0531 17:44:32.075810  191945 cli_runner.go:164] Run: docker exec calico-20220531174030-6903 stat /var/lib/dpkg/alternatives/iptables
	I0531 17:44:32.176825  191945 oci.go:247] the created container "calico-20220531174030-6903" has a running status.
	I0531 17:44:32.176855  191945 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa...
	I0531 17:44:32.372256  191945 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 17:44:32.483001  191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
	I0531 17:44:32.530410  191945 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 17:44:32.530433  191945 kic_runner.go:114] Args: [docker exec --privileged calico-20220531174030-6903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 17:44:32.629823  191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
	I0531 17:44:32.665893  191945 machine.go:88] provisioning docker machine ...
	I0531 17:44:32.665938  191945 ubuntu.go:169] provisioning hostname "calico-20220531174030-6903"
	I0531 17:44:32.665986  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:44:32.694790  191945 main.go:134] libmachine: Using SSH client type: native
	I0531 17:44:32.694948  191945 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49382 <nil> <nil>}
	I0531 17:44:32.694964  191945 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220531174030-6903 && echo "calico-20220531174030-6903" | sudo tee /etc/hostname
	I0531 17:44:32.829913  191945 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220531174030-6903
	
	I0531 17:44:32.829999  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:44:32.872617  191945 main.go:134] libmachine: Using SSH client type: native
	I0531 17:44:32.872781  191945 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49382 <nil> <nil>}
	I0531 17:44:32.872803  191945 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220531174030-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220531174030-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220531174030-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:44:32.990834  191945 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 17:44:32.990867  191945 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 17:44:32.990893  191945 ubuntu.go:177] setting up certificates
	I0531 17:44:32.990903  191945 provision.go:83] configureAuth start
	I0531 17:44:32.990958  191945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531174030-6903
	I0531 17:44:33.030021  191945 provision.go:138] copyHostCerts
	I0531 17:44:33.030089  191945 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 17:44:33.030098  191945 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 17:44:33.030158  191945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 17:44:33.030258  191945 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 17:44:33.030267  191945 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 17:44:33.030299  191945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 17:44:33.030402  191945 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 17:44:33.030410  191945 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 17:44:33.030446  191945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 17:44:33.030515  191945 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.calico-20220531174030-6903 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220531174030-6903]
	I0531 17:44:33.118795  191945 provision.go:172] copyRemoteCerts
	I0531 17:44:33.118849  191945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:44:33.118877  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:44:33.150867  191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
	I0531 17:44:33.234680  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0531 17:44:33.253701  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 17:44:33.270858  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 17:44:33.287300  191945 provision.go:86] duration metric: configureAuth took 296.381936ms
	I0531 17:44:33.287324  191945 ubuntu.go:193] setting minikube options for container-runtime
	I0531 17:44:33.287477  191945 config.go:178] Loaded profile config "calico-20220531174030-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:44:33.287493  191945 machine.go:91] provisioned docker machine in 621.576994ms
	I0531 17:44:33.287500  191945 client.go:171] LocalClient.Create took 10.46512974s
	I0531 17:44:33.287525  191945 start.go:173] duration metric: libmachine.API.Create for "calico-20220531174030-6903" took 10.465198487s
	I0531 17:44:33.287538  191945 start.go:306] post-start starting for "calico-20220531174030-6903" (driver="docker")
	I0531 17:44:33.287545  191945 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:44:33.287588  191945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:44:33.287620  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:44:33.320073  191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
	I0531 17:44:33.402855  191945 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:44:33.406367  191945 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 17:44:33.406394  191945 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 17:44:33.406404  191945 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 17:44:33.406410  191945 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 17:44:33.406421  191945 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 17:44:33.406465  191945 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 17:44:33.406550  191945 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 17:44:33.406654  191945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 17:44:33.414799  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:44:33.436598  191945 start.go:309] post-start completed in 149.043256ms
	I0531 17:44:33.436986  191945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531174030-6903
	I0531 17:44:33.470983  191945 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/config.json ...
	I0531 17:44:33.471296  191945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:44:33.471340  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:44:33.500091  191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
	I0531 17:44:33.587261  191945 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:44:33.590846  191945 start.go:134] duration metric: createHost completed in 10.772290493s
	I0531 17:44:33.590868  191945 start.go:81] releasing machines lock for "calico-20220531174030-6903", held for 10.772440336s
	I0531 17:44:33.590940  191945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531174030-6903
	I0531 17:44:33.626802  191945 ssh_runner.go:195] Run: systemctl --version
	I0531 17:44:33.626849  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:44:33.626912  191945 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 17:44:33.626975  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:44:33.665227  191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
	I0531 17:44:33.665649  191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
	I0531 17:44:33.760669  191945 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 17:44:33.770346  191945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 17:44:33.779045  191945 docker.go:187] disabling docker service ...
	I0531 17:44:33.779093  191945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 17:44:33.794960  191945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 17:44:33.803534  191945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 17:44:33.891747  191945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 17:44:33.970396  191945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 17:44:33.979700  191945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 17:44:33.992273  191945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0LmQiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
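	The containerd configuration is delivered as an inline base64 payload in the tee command above. To inspect the generated /etc/containerd/config.toml, decode that payload; a sketch only, where PAYLOAD is a stand-in for the quoted base64 string (it is not a variable that appears in the log):

	    # decode the embedded containerd config.toml for inspection
	    echo "$PAYLOAD" | base64 -d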
	I0531 17:44:34.004857  191945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 17:44:34.010770  191945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 17:44:34.016785  191945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:44:34.089901  191945 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 17:44:34.151540  191945 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 17:44:34.151603  191945 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 17:44:34.155023  191945 start.go:468] Will wait 60s for crictl version
	I0531 17:44:34.155086  191945 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:44:34.180956  191945 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T17:44:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 17:44:45.230326  191945 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:44:45.253990  191945 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 17:44:45.254056  191945 ssh_runner.go:195] Run: containerd --version
	I0531 17:44:45.284157  191945 ssh_runner.go:195] Run: containerd --version
	I0531 17:44:45.491293  191945 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 17:44:45.691954  191945 cli_runner.go:164] Run: docker network inspect calico-20220531174030-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:44:45.723579  191945 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0531 17:44:45.726893  191945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:44:45.773556  191945 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:44:45.773625  191945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:44:45.799858  191945 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:44:45.799882  191945 containerd.go:521] Images already preloaded, skipping extraction
	I0531 17:44:45.799933  191945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:44:45.823114  191945 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:44:45.823135  191945 cache_images.go:84] Images are preloaded, skipping loading
	I0531 17:44:45.823215  191945 ssh_runner.go:195] Run: sudo crictl info
	I0531 17:44:45.845103  191945 cni.go:95] Creating CNI manager for "calico"
	I0531 17:44:45.845127  191945 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 17:44:45.845138  191945 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220531174030-6903 NodeName:calico-20220531174030-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 17:44:45.845281  191945 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20220531174030-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
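	A config of this shape is handed straight to kubeadm. A hedged sketch of the invocation (the binary path matches the ls check below; the config path without the .new suffix is an assumption about the final rename, and minikube's real call adds preflight/ignore flags not shown in this log):

	    sudo /var/lib/minikube/binaries/v1.23.6/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml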
	
	I0531 17:44:45.845355  191945 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20220531174030-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:calico-20220531174030-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0531 17:44:45.845395  191945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 17:44:45.851958  191945 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 17:44:45.852010  191945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 17:44:45.858326  191945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0531 17:44:45.875725  191945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 17:44:45.888457  191945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2055 bytes)
	I0531 17:44:45.900030  191945 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 17:44:45.902650  191945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:44:45.969978  191945 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903 for IP: 192.168.67.2
	I0531 17:44:45.970099  191945 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 17:44:45.970154  191945 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 17:44:45.970221  191945 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.key
	I0531 17:44:45.970240  191945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.crt with IP's: []
	I0531 17:44:46.605212  191945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.crt ...
	I0531 17:44:46.605248  191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.crt: {Name:mkc5097615ef999d9450cf2656949863c65dc5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:44:46.605458  191945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.key ...
	I0531 17:44:46.605497  191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.key: {Name:mk53beb1200de52df78bb8197e9ae092f5d8a8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:44:46.605640  191945 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key.c7fa3a9e
	I0531 17:44:46.605661  191945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 17:44:46.880556  191945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt.c7fa3a9e ...
	I0531 17:44:46.880596  191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt.c7fa3a9e: {Name:mke02f6794e2b012b33c1991ccb19b8dd6fec7be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:44:46.880804  191945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key.c7fa3a9e ...
	I0531 17:44:46.880828  191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key.c7fa3a9e: {Name:mk05d8a5f1afd92af8b07b6630ac9898a2b66750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:44:46.880957  191945 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt
	I0531 17:44:46.881025  191945 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key
	I0531 17:44:46.881075  191945 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.key
	I0531 17:44:46.881090  191945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.crt with IP's: []
	I0531 17:44:47.159721  191945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.crt ...
	I0531 17:44:47.159746  191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.crt: {Name:mk47149c8fd427bd098ee8c80bdf8489ada06105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:44:47.159905  191945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.key ...
	I0531 17:44:47.159917  191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.key: {Name:mkbec5b9b21d4f401dd90b7d689951c475a7e3af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:44:47.160068  191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 17:44:47.160102  191945 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 17:44:47.160113  191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 17:44:47.160138  191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 17:44:47.160166  191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 17:44:47.160188  191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 17:44:47.160223  191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:44:47.160747  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 17:44:47.225589  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 17:44:47.246657  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 17:44:47.282322  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 17:44:47.299593  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 17:44:47.317655  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 17:44:47.335715  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 17:44:47.450769  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 17:44:47.569939  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 17:44:47.587183  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 17:44:47.603541  191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 17:44:47.619803  191945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 17:44:47.631602  191945 ssh_runner.go:195] Run: openssl version
	I0531 17:44:47.636108  191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 17:44:47.642735  191945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:44:47.645568  191945 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:44:47.645604  191945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:44:47.650018  191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 17:44:47.657148  191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 17:44:47.663941  191945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 17:44:47.666681  191945 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 17:44:47.666722  191945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 17:44:47.671288  191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 17:44:47.677962  191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 17:44:47.684807  191945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 17:44:47.687610  191945 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 17:44:47.687650  191945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 17:44:47.692218  191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
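The "openssl x509 -hash -noout" runs above compute each certificate's subject-name hash, which is the filename OpenSSL's CA lookup expects under /etc/ssl/certs ("<hash>.0"); that is where the otherwise opaque symlink names b5213941.0, 51391683.0, and 3ec20f2e.0 come from. Reproducing the first one by hand (illustrative):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above
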
	I0531 17:44:47.699197  191945 kubeadm.go:395] StartCluster: {Name:calico-20220531174030-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531174030-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:44:47.699283  191945 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 17:44:47.699312  191945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 17:44:47.723918  191945 cri.go:87] found id: ""
	I0531 17:44:47.723965  191945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 17:44:47.730813  191945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 17:44:47.738631  191945 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 17:44:47.738680  191945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 17:44:47.745010  191945 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
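The exit status 2 above is the expected outcome on a freshly created node: ls exits 2 when a path it was asked to list does not exist, so minikube reads the failure as "no stale kubeadm config to clean up" and proceeds straight to kubeadm init. A quick reproduction on the node (illustrative):

	sudo ls -la /etc/kubernetes/admin.conf; echo "ls exit status: $?"
	# prints "No such file or directory" and then "ls exit status: 2" on a fresh node
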
	I0531 17:44:47.745044  191945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 17:44:48.069508  191945 out.go:204]   - Generating certificates and keys ...
	I0531 17:44:51.103858  191945 out.go:204]   - Booting up control plane ...
	I0531 17:45:03.153008  191945 out.go:204]   - Configuring RBAC rules ...
	I0531 17:45:03.567301  191945 cni.go:95] Creating CNI manager for "calico"
	I0531 17:45:03.569348  191945 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0531 17:45:03.570760  191945 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 17:45:03.570785  191945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0531 17:45:03.588499  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 17:45:05.707761  191945 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.119199621s)
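Calico is installed here the same way it would be installed by hand: minikube copies its bundled ~200 KB manifest to the node and runs kubectl apply against the local kubeconfig, which took about 2.1s. The hand-run equivalent would look roughly like this (illustrative; calico.yaml is a stand-in name for minikube's bundled manifest):

	kubectl --context calico-20220531174030-6903 apply -f calico.yaml
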
	I0531 17:45:05.707831  191945 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 17:45:05.707916  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:05.707925  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=calico-20220531174030-6903 minikube.k8s.io/updated_at=2022_05_31T17_45_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:05.716640  191945 ops.go:34] apiserver oom_adj: -16
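Two post-init checks land here: the minikube-rbac clusterrolebinding grants cluster-admin to the kube-system:default service account, and the oom_adj of -16 read back from /proc confirms the API server process sits near the bottom of the OOM killer's preference order, as expected for a control-plane static pod. The binding could be inspected afterwards with (illustrative, not from this run):

	kubectl --context calico-20220531174030-6903 get clusterrolebinding minikube-rbac -o wide
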
	I0531 17:45:05.810294  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:06.374615  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:06.874730  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:07.374505  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:07.874926  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:08.374114  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:08.874631  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:09.374861  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:09.874161  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:10.374874  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:10.875022  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:11.374069  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:11.874299  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:12.374731  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:12.874431  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:13.374193  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:13.874298  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:14.374091  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:14.874657  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:15.374294  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:15.874195  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:16.374091  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:16.875070  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:17.374644  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:17.874891  191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:45:17.945566  191945 kubeadm.go:1045] duration metric: took 12.237702399s to wait for elevateKubeSystemPrivileges.
	I0531 17:45:17.945598  191945 kubeadm.go:397] StartCluster complete in 30.246407438s
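The burst of "kubectl get sa default" calls between 17:45:05 and 17:45:17 is a plain poll: the default ServiceAccount is created asynchronously by the controller manager once the control plane is up, and minikube retries roughly every 500ms until the object exists (12.2s here) before binding privileges to it. A shell-level sketch of the same wait (illustrative):

	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
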
	I0531 17:45:17.945620  191945 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:45:17.945717  191945 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:45:17.946631  191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:45:18.462609  191945 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220531174030-6903" rescaled to 1
	I0531 17:45:18.462658  191945 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:45:18.462670  191945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 17:45:18.465244  191945 out.go:177] * Verifying Kubernetes components...
	I0531 17:45:18.462761  191945 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 17:45:18.462922  191945 config.go:178] Loaded profile config "calico-20220531174030-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:45:18.466777  191945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:45:18.466801  191945 addons.go:65] Setting storage-provisioner=true in profile "calico-20220531174030-6903"
	I0531 17:45:18.466819  191945 addons.go:65] Setting default-storageclass=true in profile "calico-20220531174030-6903"
	I0531 17:45:18.466826  191945 addons.go:153] Setting addon storage-provisioner=true in "calico-20220531174030-6903"
	W0531 17:45:18.466833  191945 addons.go:165] addon storage-provisioner should already be in state true
	I0531 17:45:18.466839  191945 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220531174030-6903"
	I0531 17:45:18.466883  191945 host.go:66] Checking if "calico-20220531174030-6903" exists ...
	I0531 17:45:18.467269  191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
	I0531 17:45:18.467455  191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
	I0531 17:45:18.515420  191945 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:45:18.516906  191945 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:45:18.516928  191945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 17:45:18.516979  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:45:18.517318  191945 addons.go:153] Setting addon default-storageclass=true in "calico-20220531174030-6903"
	W0531 17:45:18.517338  191945 addons.go:165] addon default-storageclass should already be in state true
	I0531 17:45:18.517365  191945 host.go:66] Checking if "calico-20220531174030-6903" exists ...
	I0531 17:45:18.517870  191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
	I0531 17:45:18.552388  191945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 17:45:18.553698  191945 node_ready.go:35] waiting up to 5m0s for node "calico-20220531174030-6903" to be "Ready" ...
	I0531 17:45:18.557538  191945 node_ready.go:49] node "calico-20220531174030-6903" has status "Ready":"True"
	I0531 17:45:18.557559  191945 node_ready.go:38] duration metric: took 3.832995ms waiting for node "calico-20220531174030-6903" to be "Ready" ...
	I0531 17:45:18.557569  191945 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 17:45:18.562448  191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
	I0531 17:45:18.568738  191945 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace to be "Ready" ...
	I0531 17:45:18.574270  191945 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 17:45:18.574289  191945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 17:45:18.574338  191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
	I0531 17:45:18.618103  191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
	I0531 17:45:18.731087  191945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:45:18.825046  191945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 17:45:19.810559  191945 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.258134911s)
	I0531 17:45:19.810622  191945 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
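The sed pipeline that just completed rewrites the CoreDNS Corefile in place: it inserts a hosts block that resolves host.minikube.internal to the gateway IP 192.168.67.1, with fallthrough so every other name still reaches the existing "forward . /etc/resolv.conf" plugin. The injected record can be verified with something like (illustrative, not from this run):

	kubectl --context calico-20220531174030-6903 -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
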
	I0531 17:45:19.848626  191945 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117501518s)
	I0531 17:45:19.848648  191945 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023563342s)
	I0531 17:45:19.850718  191945 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 17:45:19.852200  191945 addons.go:417] enableAddons completed in 1.389440529s
	I0531 17:45:20.582901  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:22.583432  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:24.602429  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:27.084878  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:29.583726  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:31.583899  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:33.584750  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:36.106316  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:38.583210  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:41.082979  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:43.113413  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:45.582809  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:47.583911  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:49.587161  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:52.083635  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:54.083775  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:56.085548  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:45:58.582810  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:01.083232  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:03.582938  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:05.584129  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:08.083014  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:10.583133  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:12.583739  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:15.083603  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:17.583224  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:19.583253  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:22.082892  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:24.083116  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:26.083573  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:28.083613  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:30.582588  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:32.583008  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:34.583279  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:36.583522  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:38.585243  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:41.082825  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:43.082945  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:45.582967  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:47.583386  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:50.082327  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:52.082813  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:54.082883  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:56.583127  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:46:58.583221  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:01.082598  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:03.082696  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:05.082725  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:07.082912  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:09.582618  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:11.583326  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:13.583375  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:16.083457  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:18.583503  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:21.083316  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:23.083531  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:25.083582  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:27.583027  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:29.583132  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:32.083360  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:34.084807  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:36.583585  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:39.082977  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:41.583547  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:44.083010  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:46.583573  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:49.085253  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:51.582961  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:54.083245  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:56.583389  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:47:59.084014  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:01.582977  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:03.583662  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:05.583842  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:08.082954  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:10.083667  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:12.582524  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:14.582599  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:16.582700  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:18.583535  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:21.083270  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:23.083662  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:25.583474  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:27.583937  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:30.082350  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:32.082793  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:34.582514  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:36.583228  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:38.583401  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:41.083185  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:43.083246  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:45.583014  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:48.082931  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:50.582764  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:52.583042  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:55.082803  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:57.083015  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:48:59.083905  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:01.582839  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:03.583046  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:06.083408  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:08.583769  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:11.082575  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:13.582836  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:16.083132  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:18.083443  191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:18.586657  191945 pod_ready.go:81] duration metric: took 4m0.017849669s waiting for pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace to be "Ready" ...
	E0531 17:49:18.586682  191945 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
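The loop above polls the pod's Ready condition every couple of seconds and never sees it turn True, so the four-minute extra wait expires; the calico-node wait below fails the same way, and that is what ultimately produces the test's exit status 80. Typical triage for Calico pods stuck NotReady would start with commands like these (illustrative, not from this run; calico-node is the conventional container name in the Calico daemonset):

	kubectl --context calico-20220531174030-6903 -n kube-system describe pod -l k8s-app=calico-kube-controllers
	kubectl --context calico-20220531174030-6903 -n kube-system logs -l k8s-app=calico-node -c calico-node --tail=50
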
	I0531 17:49:18.586690  191945 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-49qlm" in "kube-system" namespace to be "Ready" ...
	I0531 17:49:20.597662  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:22.597812  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:25.097004  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:27.098107  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:29.597262  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:31.597657  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:33.600025  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:36.097452  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:38.597030  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:40.597784  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:43.097372  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:45.097482  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:47.097511  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:49.597483  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:51.598479  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:54.097256  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:56.098728  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:49:58.597619  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:01.096983  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:03.097896  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:05.598139  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:07.598205  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:10.097686  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:12.597355  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:15.098066  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:17.599280  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:20.098147  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:22.597586  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:24.598028  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:27.097616  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:29.098059  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:31.597426  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:34.097095  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:36.598046  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:39.098179  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:41.597200  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:43.597754  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:45.599936  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:48.097343  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:50.097699  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:52.597760  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:54.598142  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:57.098450  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:50:59.597440  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:02.097680  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:04.097711  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:06.598215  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:09.097830  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:11.098299  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:13.597754  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:15.597856  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:18.097580  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:20.097709  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:22.597788  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:25.097876  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:27.097933  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:29.597703  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:31.597939  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:34.097057  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:36.098091  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:38.099257  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:40.597798  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:42.598296  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:45.097903  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:47.597361  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:49.597897  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:51.597941  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:54.097538  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:56.597683  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:51:58.597729  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:01.097659  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:03.597384  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:05.598050  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:08.097350  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:10.097788  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:12.597579  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:15.096815  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:17.097812  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:19.597872  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:22.098274  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:24.597635  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:27.097550  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:29.598258  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:32.097513  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:34.097756  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:36.598024  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:39.097134  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:41.097648  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:43.597579  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:46.098458  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:48.597693  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:50.597817  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:53.097571  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:55.598122  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:52:58.097671  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:00.596848  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:02.597021  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:04.597970  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:06.598291  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:09.097703  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:11.597830  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:14.098422  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:16.597217  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:18.598282  191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
	I0531 17:53:18.602992  191945 pod_ready.go:81] duration metric: took 4m0.016291869s waiting for pod "calico-node-49qlm" in "kube-system" namespace to be "Ready" ...
	E0531 17:53:18.603015  191945 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0531 17:53:18.603032  191945 pod_ready.go:38] duration metric: took 8m0.045451384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 17:53:18.605621  191945 out.go:177] 
	W0531 17:53:18.607399  191945 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0531 17:53:18.607422  191945 out.go:239] * 
	W0531 17:53:18.608394  191945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 17:53:18.609565  191945 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (536.35s)
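Triage note: the start failed because calico-node never became Ready within the 4m pod wait / 5m node wait windows above. A minimal local triage sketch, assuming the calico-20220531174030-6903 profile still exists and that the Calico manifest carries its usual k8s-app=calico-node label (not confirmed by this log):

	# Describe the stuck DaemonSet pod and pull its recent logs
	kubectl --context calico-20220531174030-6903 -n kube-system describe pod -l k8s-app=calico-node
	kubectl --context calico-20220531174030-6903 -n kube-system logs -l k8s-app=calico-node --all-containers --tail=50
	# Collect the full log bundle requested by the failure message
	minikube logs -p calico-20220531174030-6903 --file=logs.txt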

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (350.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134544803s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13131041s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140225959s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0531 17:50:56.824771    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:51:05.002216    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:51:10.365049    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123002346s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.116142998s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:51:37.785410    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121008922s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133409378s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:52:28.118880    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.115734327s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128439801s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143185369s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0531 17:53:57.611079    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:54:11.142967    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.115756545s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:55:43.546329    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132453214s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (350.53s)
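Triage note: every nslookup attempt timed out, so pods in this cluster could not reach cluster DNS at all rather than getting a wrong answer (the test wants 10.96.0.1, the kubernetes.default ClusterIP). A hedged sketch for narrowing this down, assuming the profile is still up and the conventional kube-dns service IP of 10.96.0.10 (verify with the first command):

	# Is CoreDNS running, and what is its service IP?
	kubectl --context enable-default-cni-20220531174029-6903 -n kube-system get svc kube-dns
	kubectl --context enable-default-cni-20220531174029-6903 -n kube-system get pods -l k8s-app=kube-dns -o wide
	# Query the DNS service IP directly to separate resolv.conf problems from plain unreachability
	kubectl --context enable-default-cni-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default 10.96.0.10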

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (351.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:50:08.923286    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:50:15.863223    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:15.868478    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:15.878706    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:15.898942    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:15.939217    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:16.019469    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:16.179871    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:16.500413    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:17.140988    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:18.421510    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:20.982249    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129692758s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:50:26.103099    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:50:29.404295    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:50:33.065377    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:50:36.344107    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139026219s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121035956s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120410137s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13126583s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.117573862s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0531 17:51:54.986385    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125515202s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:52:32.285965    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122723026s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:52:59.705927    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120055302s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.213309208s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
E0531 17:54:25.073298    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.11470746s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0531 17:54:38.826652    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:54:48.440474    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136706146s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (351.76s)
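Triage note: same failure shape as the enable-default-cni run above; with the bridge CNI the usual first check is whether a CNI config was actually written where kubelet looks for it. A sketch under the assumption that this run, like the no-preload run below, points kubelet at /etc/cni/net.mk (check both directories):

	# Look for the generated bridge CNI config on the node
	minikube -p bridge-20220531174029-6903 ssh -- sudo ls -l /etc/cni/net.d /etc/cni/net.mk
	# Confirm kube-proxy and CoreDNS are healthy before blaming the CNI config
	kubectl --context bridge-20220531174029-6903 -n kube-system get pods -o wide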
E0531 18:01:20.348333    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:01:21.635643    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:02:15.750204    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 18:02:42.268989    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:02:43.431679    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 18:02:43.555925    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:03:57.610565    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 18:04:11.143892    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 18:04:25.073041    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 18:04:48.440008    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 18:04:58.425620    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:04:59.712221    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:05:15.863417    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 18:05:26.110000    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:05:27.396271    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:05:34.187642    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 18:06:05.002314    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 18:06:11.487170    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (291.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220531175323-6903 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-20220531175323-6903 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (4m49.824503226s)

                                                
                                                
-- stdout --
	* [no-preload-20220531175323-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node no-preload-20220531175323-6903 in cluster no-preload-20220531175323-6903
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 17:53:23.978430  230185 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:53:23.978610  230185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:53:23.978623  230185 out.go:309] Setting ErrFile to fd 2...
	I0531 17:53:23.978631  230185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:53:23.978752  230185 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:53:23.979065  230185 out.go:303] Setting JSON to false
	I0531 17:53:23.980557  230185 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5755,"bootTime":1654013849,"procs":482,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:53:23.980614  230185 start.go:125] virtualization: kvm guest
	I0531 17:53:23.983250  230185 out.go:177] * [no-preload-20220531175323-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:53:23.984781  230185 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:53:23.984811  230185 notify.go:193] Checking for updates...
	I0531 17:53:23.986216  230185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:53:23.987646  230185 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:53:23.989027  230185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:53:23.990433  230185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:53:23.992168  230185 config.go:178] Loaded profile config "bridge-20220531174029-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:53:23.992289  230185 config.go:178] Loaded profile config "enable-default-cni-20220531174029-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:53:23.992452  230185 config.go:178] Loaded profile config "old-k8s-version-20220531174534-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0531 17:53:23.992513  230185 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:53:24.034692  230185 docker.go:137] docker version: linux-20.10.16
	I0531 17:53:24.034816  230185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:53:24.137103  230185 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 17:53:24.064962067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:53:24.137227  230185 docker.go:254] overlay module found
	I0531 17:53:24.139382  230185 out.go:177] * Using the docker driver based on user configuration
	I0531 17:53:24.140751  230185 start.go:284] selected driver: docker
	I0531 17:53:24.140764  230185 start.go:806] validating driver "docker" against <nil>
	I0531 17:53:24.140781  230185 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:53:24.142111  230185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:53:24.243217  230185 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 17:53:24.171050156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:53:24.243345  230185 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 17:53:24.243498  230185 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 17:53:24.245443  230185 out.go:177] * Using Docker driver with the root privilege
	I0531 17:53:24.246920  230185 cni.go:95] Creating CNI manager for ""
	I0531 17:53:24.246942  230185 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:53:24.246962  230185 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:53:24.246975  230185 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:53:24.246987  230185 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 17:53:24.247003  230185 start_flags.go:306] config:
	{Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
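The struct dumped above is exactly what minikube persists as the profile's config.json (see the "Saving config to ..." line just below). After a failed run it can be inspected on the Jenkins host; a sketch, assuming jq is installed there:

    # Pretty-print the Kubernetes portion of the persisted profile config (jq assumed available)
    MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
    jq '.KubernetesConfig' "$MINIKUBE_HOME/profiles/no-preload-20220531175323-6903/config.json"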
	I0531 17:53:24.248801  230185 out.go:177] * Starting control plane node no-preload-20220531175323-6903 in cluster no-preload-20220531175323-6903
	I0531 17:53:24.250163  230185 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 17:53:24.251398  230185 out.go:177] * Pulling base image ...
	I0531 17:53:24.252741  230185 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:53:24.252809  230185 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:53:24.252887  230185 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json ...
	I0531 17:53:24.252927  230185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json: {Name:mk68933a542ead1304dcc5ee38022376521a150a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:53:24.253008  230185 cache.go:107] acquiring lock: {Name:mke7c3123bbb887802876b6038e785eff1d65578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.253096  230185 cache.go:107] acquiring lock: {Name:mka8d6fd8013f251c85f4bca8a18522e173be81e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.253143  230185 cache.go:107] acquiring lock: {Name:mk59854aac2611f794ffa59524077b81afbc7de4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.253212  230185 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0531 17:53:24.253224  230185 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 exists
	I0531 17:53:24.253236  230185 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 93.307µs
	I0531 17:53:24.253240  230185 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6" took 154.175µs
	I0531 17:53:24.253259  230185 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0531 17:53:24.253262  230185 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 succeeded
	I0531 17:53:24.253268  230185 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 17:53:24.253287  230185 cache.go:107] acquiring lock: {Name:mk4a95c9ed8757a79d1e9fa1e44efcaead7631e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.253295  230185 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 301.55µs
	I0531 17:53:24.253306  230185 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 17:53:24.253287  230185 cache.go:107] acquiring lock: {Name:mk92196aa514c10ef84dd2326a35399f7c3719a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.253310  230185 cache.go:107] acquiring lock: {Name:mkccfd735c16da1ed9ea4fc459feb477365b33a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.253119  230185 cache.go:107] acquiring lock: {Name:mk598b9f501113e758a5b1053c8a9a41e87e7c42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.253345  230185 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0531 17:53:24.253358  230185 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 exists
	I0531 17:53:24.253363  230185 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0531 17:53:24.253359  230185 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 75.219µs
	I0531 17:53:24.253372  230185 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0531 17:53:24.253374  230185 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 260.527µs
	I0531 17:53:24.253372  230185 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6" took 87.322µs
	I0531 17:53:24.253384  230185 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0531 17:53:24.253385  230185 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 succeeded
	I0531 17:53:24.253404  230185 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 exists
	I0531 17:53:24.253400  230185 cache.go:107] acquiring lock: {Name:mk37d69d4525de4b98ff3597b4269e1680132b96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.253418  230185 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6" took 110.12µs
	I0531 17:53:24.253430  230185 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 succeeded
	I0531 17:53:24.253434  230185 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 exists
	I0531 17:53:24.253443  230185 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6" took 46.275µs
	I0531 17:53:24.253449  230185 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 succeeded
	I0531 17:53:24.253470  230185 cache.go:87] Successfully saved all images to host disk.
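All eight image locks above resolved to cache hits, so nothing was re-downloaded for this no-preload profile; each image lives as a tarball under the cache directory named in the log lines. A plain listing confirms the hits (a sketch; MINIKUBE_HOME as above, and the exact entries depend on prior runs):

    # List the cached image tarballs referenced by the exists/succeeded lines above
    ls "$MINIKUBE_HOME/cache/images/amd64/k8s.gcr.io/"
    # expected entries include: coredns/ etcd_3.5.1-0 kube-apiserver_v1.23.6
    #   kube-controller-manager_v1.23.6 kube-proxy_v1.23.6 kube-scheduler_v1.23.6 pause_3.6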
	I0531 17:53:24.301754  230185 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 17:53:24.301785  230185 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 17:53:24.301804  230185 cache.go:206] Successfully downloaded all kic artifacts
	I0531 17:53:24.301855  230185 start.go:352] acquiring machines lock for no-preload-20220531175323-6903: {Name:mk8635283b759be2fcd7aacbafc64b0c778ff5b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:53:24.301989  230185 start.go:356] acquired machines lock for "no-preload-20220531175323-6903" in 111.48µs
	I0531 17:53:24.302018  230185 start.go:91] Provisioning new machine with config: &{Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:53:24.302135  230185 start.go:131] createHost starting for "" (driver="docker")
	I0531 17:53:24.304303  230185 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 17:53:24.304582  230185 start.go:165] libmachine.API.Create for "no-preload-20220531175323-6903" (driver="docker")
	I0531 17:53:24.304615  230185 client.go:168] LocalClient.Create starting
	I0531 17:53:24.304709  230185 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:53:24.304762  230185 main.go:134] libmachine: Decoding PEM data...
	I0531 17:53:24.304795  230185 main.go:134] libmachine: Parsing certificate...
	I0531 17:53:24.304879  230185 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:53:24.304906  230185 main.go:134] libmachine: Decoding PEM data...
	I0531 17:53:24.304926  230185 main.go:134] libmachine: Parsing certificate...
	I0531 17:53:24.305341  230185 cli_runner.go:164] Run: docker network inspect no-preload-20220531175323-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:53:24.336470  230185 cli_runner.go:211] docker network inspect no-preload-20220531175323-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:53:24.336542  230185 network_create.go:272] running [docker network inspect no-preload-20220531175323-6903] to gather additional debugging logs...
	I0531 17:53:24.336572  230185 cli_runner.go:164] Run: docker network inspect no-preload-20220531175323-6903
	W0531 17:53:24.366659  230185 cli_runner.go:211] docker network inspect no-preload-20220531175323-6903 returned with exit code 1
	I0531 17:53:24.366701  230185 network_create.go:275] error running [docker network inspect no-preload-20220531175323-6903]: docker network inspect no-preload-20220531175323-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220531175323-6903
	I0531 17:53:24.366728  230185 network_create.go:277] output of [docker network inspect no-preload-20220531175323-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220531175323-6903
	
	** /stderr **
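This first inspect is expected to fail: the profile network does not exist yet, and the non-zero exit is what routes minikube into network creation below. The long --format argument is a Go template that assembles a one-line JSON summary of the network; a trimmed manual equivalent would be:

    # Returns exit status 1 ("No such network") until the profile network is created
    docker network inspect no-preload-20220531175323-6903 \
      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'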
	I0531 17:53:24.366778  230185 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:53:24.397345  230185 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-09a226de47ed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e6:98:b2:7a}}
	I0531 17:53:24.397861  230185 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-ffba0413ceee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:c8:ab:51:43}}
	I0531 17:53:24.398458  230185 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0011c82c0] misses:0}
	I0531 17:53:24.398498  230185 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:53:24.398517  230185 network_create.go:115] attempt to create docker network no-preload-20220531175323-6903 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0531 17:53:24.398558  230185 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20220531175323-6903
	I0531 17:53:24.466595  230185 network_create.go:99] docker network no-preload-20220531175323-6903 192.168.67.0/24 created
	I0531 17:53:24.466624  230185 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20220531175323-6903" container
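Subnet selection simply walks candidate /24s (192.168.49.0, 192.168.58.0, then 192.168.67.0 in this log) until one is unclaimed, reserves it for 1m0s, and hands the first usable address to the node (.2, since .1 is the gateway). The creation step can be replayed by hand with the exact flags from the log line above:

    docker network create --driver=bridge \
      --subnet=192.168.67.0/24 --gateway=192.168.67.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      no-preload-20220531175323-6903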
	I0531 17:53:24.466684  230185 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:53:24.500385  230185 cli_runner.go:164] Run: docker volume create no-preload-20220531175323-6903 --label name.minikube.sigs.k8s.io=no-preload-20220531175323-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:53:24.531434  230185 oci.go:103] Successfully created a docker volume no-preload-20220531175323-6903
	I0531 17:53:24.531504  230185 cli_runner.go:164] Run: docker run --rm --name no-preload-20220531175323-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220531175323-6903 --entrypoint /usr/bin/test -v no-preload-20220531175323-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:53:25.067389  230185 oci.go:107] Successfully prepared a docker volume no-preload-20220531175323-6903
	I0531 17:53:25.067428  230185 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	W0531 17:53:25.067547  230185 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:53:25.067630  230185 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:53:25.168927  230185 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20220531175323-6903 --name no-preload-20220531175323-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220531175323-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20220531175323-6903 --network no-preload-20220531175323-6903 --ip 192.168.67.2 --volume no-preload-20220531175323-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
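The single long docker run above is the whole "machine": a privileged systemd container on the reserved network with a pinned IP, the profile volume mounted at /var, and the ports minikube needs (8443 for the API server, 22 for SSH, plus 2376/5000/32443) each published to an ephemeral loopback port. A commented re-wrap of the same invocation, for readability (the minikube-internal --label flags are omitted here for brevity):

    # --privileged + unconfined seccomp/apparmor: the kicbase image runs systemd,
    #   containerd and iptables inside the container
    # --network/--ip: the bridge network and static address reserved above
    # --volume ...:/var: the named volume prepared by the preload sidecar
    # --publish=127.0.0.1::PORT: bind each port to a random loopback port on the host
    docker run -d -t --privileged --security-opt seccomp=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --hostname no-preload-20220531175323-6903 --name no-preload-20220531175323-6903 \
      --network no-preload-20220531175323-6903 --ip 192.168.67.2 \
      --volume no-preload-20220531175323-6903:/var --security-opt apparmor=unconfined \
      --memory=2200mb --cpus=2 -e container=docker \
      --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418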
	I0531 17:53:25.548930  230185 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Running}}
	I0531 17:53:25.583183  230185 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 17:53:25.615235  230185 cli_runner.go:164] Run: docker exec no-preload-20220531175323-6903 stat /var/lib/dpkg/alternatives/iptables
	I0531 17:53:25.675917  230185 oci.go:247] the created container "no-preload-20220531175323-6903" has a running status.
	I0531 17:53:25.675949  230185 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa...
	I0531 17:53:25.834687  230185 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 17:53:25.932955  230185 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 17:53:25.968304  230185 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 17:53:25.968332  230185 kic_runner.go:114] Args: [docker exec --privileged no-preload-20220531175323-6903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 17:53:26.068228  230185 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 17:53:26.104521  230185 machine.go:88] provisioning docker machine ...
	I0531 17:53:26.104572  230185 ubuntu.go:169] provisioning hostname "no-preload-20220531175323-6903"
	I0531 17:53:26.104629  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:53:26.138030  230185 main.go:134] libmachine: Using SSH client type: native
	I0531 17:53:26.138242  230185 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49407 <nil> <nil>}
	I0531 17:53:26.138268  230185 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220531175323-6903 && echo "no-preload-20220531175323-6903" | sudo tee /etc/hostname
	I0531 17:53:26.258735  230185 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220531175323-6903
	
	I0531 17:53:26.258812  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:53:26.296889  230185 main.go:134] libmachine: Using SSH client type: native
	I0531 17:53:26.297060  230185 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49407 <nil> <nil>}
	I0531 17:53:26.297093  230185 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220531175323-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220531175323-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220531175323-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:53:26.410905  230185 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 17:53:26.410941  230185 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 17:53:26.410965  230185 ubuntu.go:177] setting up certificates
	I0531 17:53:26.410999  230185 provision.go:83] configureAuth start
	I0531 17:53:26.411061  230185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 17:53:26.444738  230185 provision.go:138] copyHostCerts
	I0531 17:53:26.444808  230185 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 17:53:26.444823  230185 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 17:53:26.444880  230185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 17:53:26.444980  230185 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 17:53:26.444998  230185 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 17:53:26.445036  230185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 17:53:26.445132  230185 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 17:53:26.445141  230185 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 17:53:26.445184  230185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 17:53:26.445298  230185 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220531175323-6903 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220531175323-6903]
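The generated server.pem carries SANs for the static container IP, loopback, and both hostnames, which is what lets clients dial the forwarded 127.0.0.1 port without certificate verification errors. If a TLS failure is suspected, the SANs can be checked directly on the Jenkins host (openssl assumed available; MINIKUBE_HOME as above):

    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'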
	I0531 17:53:26.776595  230185 provision.go:172] copyRemoteCerts
	I0531 17:53:26.776646  230185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:53:26.776679  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:53:26.811070  230185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 17:53:26.902626  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 17:53:26.921262  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 17:53:26.938829  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 17:53:26.957970  230185 provision.go:86] duration metric: configureAuth took 546.953294ms
	I0531 17:53:26.957999  230185 ubuntu.go:193] setting minikube options for container-runtime
	I0531 17:53:26.958183  230185 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:53:26.958195  230185 machine.go:91] provisioned docker machine in 853.654798ms
	I0531 17:53:26.958201  230185 client.go:171] LocalClient.Create took 2.65357659s
	I0531 17:53:26.958220  230185 start.go:173] duration metric: libmachine.API.Create for "no-preload-20220531175323-6903" took 2.653639975s
	I0531 17:53:26.958234  230185 start.go:306] post-start starting for "no-preload-20220531175323-6903" (driver="docker")
	I0531 17:53:26.958244  230185 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:53:26.958293  230185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:53:26.958337  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:53:26.991325  230185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 17:53:27.074845  230185 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:53:27.077714  230185 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 17:53:27.077735  230185 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 17:53:27.077745  230185 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 17:53:27.077751  230185 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 17:53:27.077759  230185 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 17:53:27.077807  230185 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 17:53:27.077872  230185 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 17:53:27.077960  230185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 17:53:27.084602  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:53:27.101712  230185 start.go:309] post-start completed in 143.464351ms
	I0531 17:53:27.102004  230185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 17:53:27.134828  230185 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json ...
	I0531 17:53:27.135059  230185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:53:27.135100  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:53:27.166729  230185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 17:53:27.243299  230185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:53:27.246983  230185 start.go:134] duration metric: createHost completed in 2.944833917s
	I0531 17:53:27.247006  230185 start.go:81] releasing machines lock for "no-preload-20220531175323-6903", held for 2.945002904s
	I0531 17:53:27.247083  230185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 17:53:27.278393  230185 ssh_runner.go:195] Run: systemctl --version
	I0531 17:53:27.278449  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:53:27.278481  230185 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 17:53:27.278541  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:53:27.311540  230185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 17:53:27.313172  230185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 17:53:27.412232  230185 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 17:53:27.422461  230185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 17:53:27.431103  230185 docker.go:187] disabling docker service ...
	I0531 17:53:27.431165  230185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 17:53:27.446847  230185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 17:53:27.455152  230185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 17:53:27.529451  230185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 17:53:27.604345  230185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 17:53:27.613989  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
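The file written here is two lines of YAML pointing crictl at containerd's socket, which is why every later `sudo crictl ...` call in this log works without an explicit --runtime-endpoint:

    # /etc/crictl.yaml inside the node, as written by the command above
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock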
	I0531 17:53:27.628729  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
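The base64 payload is the complete containerd config.toml for the node; shipping it encoded keeps the one-shot shell command quoting-safe. Decoding it (e.g. piping the blob through `base64 -d`) shows, among other settings, sandbox_image = "k8s.gcr.io/pause:3.6" and the non-standard CNI conf_dir = "/etc/cni/net.mk" that matches the kubelet extra-config set earlier. The first lines decode to:

    version = 2
    root = "/var/lib/containerd"
    state = "/run/containerd"
    oom_score = 0
    [grpc]
      address = "/run/containerd/containerd.sock"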
	I0531 17:53:27.642098  230185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 17:53:27.648022  230185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 17:53:27.654366  230185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:53:27.725260  230185 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 17:53:27.788525  230185 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 17:53:27.788585  230185 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 17:53:27.791993  230185 start.go:468] Will wait 60s for crictl version
	I0531 17:53:27.792044  230185 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:53:27.817134  230185 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 17:53:27.817185  230185 ssh_runner.go:195] Run: containerd --version
	I0531 17:53:27.844597  230185 ssh_runner.go:195] Run: containerd --version
	I0531 17:53:27.872614  230185 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 17:53:27.873910  230185 cli_runner.go:164] Run: docker network inspect no-preload-20220531175323-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:53:27.904926  230185 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0531 17:53:27.908171  230185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
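The grep-then-rewrite pair above is idempotent: any stale host.minikube.internal line is filtered out before the current gateway address (192.168.67.1, the .1 of the profile subnet) is appended, so repeated starts never accumulate duplicates. A quick check against the running node:

    docker exec no-preload-20220531175323-6903 grep host.minikube.internal /etc/hosts
    # 192.168.67.1  host.minikube.internal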
	I0531 17:53:27.919223  230185 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 17:53:27.920737  230185 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:53:27.920784  230185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:53:27.943278  230185 containerd.go:603] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.23.6". assuming images are not preloaded.
	I0531 17:53:27.943302  230185 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.23.6 k8s.gcr.io/kube-controller-manager:v1.23.6 k8s.gcr.io/kube-scheduler:v1.23.6 k8s.gcr.io/kube-proxy:v1.23.6 k8s.gcr.io/pause:3.6 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0531 17:53:27.943375  230185 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:53:27.943597  230185 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.23.6
	I0531 17:53:27.943754  230185 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.23.6
	I0531 17:53:27.943893  230185 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.23.6
	I0531 17:53:27.944028  230185 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.23.6
	I0531 17:53:27.944153  230185 image.go:134] retrieving image: k8s.gcr.io/pause:3.6
	I0531 17:53:27.944301  230185 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.1-0
	I0531 17:53:27.944599  230185 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0531 17:53:27.963600  230185 image.go:176] found k8s.gcr.io/pause:3.6 locally: &{UncompressedImageCore:0xc00170a0a0 lock:{state:0 sema:0} manifest:<nil>}
	I0531 17:53:27.963651  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.6"
	I0531 17:53:27.999749  230185 cache_images.go:116] "k8s.gcr.io/pause:3.6" needs transfer: "k8s.gcr.io/pause:3.6" does not exist at hash "6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee" in container runtime
	I0531 17:53:27.999812  230185 cri.go:216] Removing image: k8s.gcr.io/pause:3.6
	I0531 17:53:27.999854  230185 ssh_runner.go:195] Run: which crictl
	I0531 17:53:28.004087  230185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.6
	I0531 17:53:28.048407  230185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6
	I0531 17:53:28.048499  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.6
	I0531 17:53:28.054823  230185 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.6: stat -c "%s %y" /var/lib/minikube/images/pause_3.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.6': No such file or directory
	I0531 17:53:28.054851  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 --> /var/lib/minikube/images/pause_3.6 (301056 bytes)
	I0531 17:53:28.095828  230185 containerd.go:287] Loading image: /var/lib/minikube/images/pause_3.6
	I0531 17:53:28.095890  230185 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.6
	I0531 17:53:28.361373  230185 image.go:176] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc00122c028 lock:{state:0 sema:0} manifest:<nil>}
	I0531 17:53:28.361452  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0531 17:53:28.391585  230185 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 from cache
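Each of the eight images follows the cycle that pause:3.6 just completed: probe the k8s.io namespace with `ctr images check`, `crictl rmi` any tag whose hash does not match, scp the cached tarball into /var/lib/minikube/images, then `ctr import` it. Replayed by hand inside the node (a sketch; paths as in the log), the final two steps are just:

    # Probe for the image, then load the tarball copied from the host cache
    sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.6
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.6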
	I0531 17:53:28.415468  230185 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0531 17:53:28.415544  230185 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:53:28.415697  230185 ssh_runner.go:195] Run: which crictl
	I0531 17:53:28.422202  230185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:53:28.462974  230185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0531 17:53:28.463069  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0531 17:53:28.467351  230185 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0531 17:53:28.467384  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0531 17:53:28.561866  230185 containerd.go:287] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0531 17:53:28.562294  230185 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0531 17:53:28.651390  230185 image.go:176] found k8s.gcr.io/coredns/coredns:v1.8.6 locally: &{UncompressedImageCore:0xc0011c8018 lock:{state:0 sema:0} manifest:<nil>}
	I0531 17:53:28.651472  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0531 17:53:28.818439  230185 image.go:176] found k8s.gcr.io/kube-scheduler:v1.23.6 locally: &{UncompressedImageCore:0xc0011c8028 lock:{state:0 sema:0} manifest:<nil>}
	I0531 17:53:28.818519  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.23.6"
	I0531 17:53:29.245476  230185 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0531 17:53:29.245596  230185 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0531 17:53:29.245627  230185 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0531 17:53:29.245631  230185 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.23.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.23.6" does not exist at hash "595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0" in container runtime
	I0531 17:53:29.245657  230185 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.23.6
	I0531 17:53:29.245667  230185 ssh_runner.go:195] Run: which crictl
	I0531 17:53:29.245685  230185 ssh_runner.go:195] Run: which crictl
	I0531 17:53:29.252304  230185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0531 17:53:29.252809  230185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.23.6
	I0531 17:53:29.289796  230185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6
	I0531 17:53:29.289862  230185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0531 17:53:29.289947  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0531 17:53:29.289969  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.23.6
	I0531 17:53:29.313403  230185 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0531 17:53:29.313447  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0531 17:53:29.313525  230185 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.23.6: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.23.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.23.6': No such file or directory
	I0531 17:53:29.313548  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 --> /var/lib/minikube/images/kube-scheduler_v1.23.6 (15136768 bytes)
	I0531 17:53:29.416240  230185 containerd.go:287] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0531 17:53:29.416344  230185 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0531 17:53:29.579725  230185 image.go:176] found k8s.gcr.io/kube-proxy:v1.23.6 locally: &{UncompressedImageCore:0xc000122238 lock:{state:0 sema:0} manifest:<nil>}
	I0531 17:53:29.579787  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.23.6"
	I0531 17:53:29.652749  230185 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.23.6 locally: &{UncompressedImageCore:0xc000122278 lock:{state:0 sema:0} manifest:<nil>}
	I0531 17:53:29.652826  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.23.6"
	I0531 17:53:29.808248  230185 image.go:176] found k8s.gcr.io/kube-apiserver:v1.23.6 locally: &{UncompressedImageCore:0xc00048a180 lock:{state:0 sema:0} manifest:<nil>}
	I0531 17:53:29.808325  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.23.6"
	I0531 17:53:30.203219  230185 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0531 17:53:30.203283  230185 containerd.go:287] Loading image: /var/lib/minikube/images/kube-scheduler_v1.23.6
	I0531 17:53:30.203334  230185 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.23.6
	I0531 17:53:30.203447  230185 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.23.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.23.6" does not exist at hash "4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47" in container runtime
	I0531 17:53:30.203484  230185 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.23.6
	I0531 17:53:30.203516  230185 ssh_runner.go:195] Run: which crictl
	I0531 17:53:30.203626  230185 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.23.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.23.6" does not exist at hash "df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657" in container runtime
	I0531 17:53:30.203661  230185 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.23.6
	I0531 17:53:30.203686  230185 ssh_runner.go:195] Run: which crictl
	I0531 17:53:30.203773  230185 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.23.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.23.6" does not exist at hash "8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6" in container runtime
	I0531 17:53:30.203807  230185 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.23.6
	I0531 17:53:30.203827  230185 ssh_runner.go:195] Run: which crictl
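The "needs transfer" records above follow a check-then-reload pattern: the image's hash in the container runtime is compared against the expected one, a stale copy is removed with crictl, and the image is re-imported from the on-disk cache with ctr. A minimal shell sketch of that pattern (image name taken from the records above; not the exact minikube code path):

    # Re-load an image when the runtime's copy is missing or has the wrong hash.
    IMG=k8s.gcr.io/kube-proxy:v1.23.6
    if ! sudo ctr -n=k8s.io images check | grep -q "$IMG"; then
        sudo crictl rmi "$IMG" || true    # drop any stale copy; ignore "not found"
        sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.23.6
    fi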
	I0531 17:53:30.921315  230185 image.go:176] found k8s.gcr.io/etcd:3.5.1-0 locally: &{UncompressedImageCore:0xc00048a130 lock:{state:0 sema:0} manifest:<nil>}
	I0531 17:53:30.921388  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.1-0"
	I0531 17:53:31.161513  230185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.23.6
	I0531 17:53:31.161549  230185 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 from cache
	I0531 17:53:31.161621  230185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.23.6
	I0531 17:53:31.161662  230185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.23.6
	I0531 17:53:31.161666  230185 cache_images.go:116] "k8s.gcr.io/etcd:3.5.1-0" needs transfer: "k8s.gcr.io/etcd:3.5.1-0" does not exist at hash "25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d" in container runtime
	I0531 17:53:31.161757  230185 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.1-0
	I0531 17:53:31.161807  230185 ssh_runner.go:195] Run: which crictl
	I0531 17:53:31.206570  230185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6
	I0531 17:53:31.206638  230185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6
	I0531 17:53:31.206659  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.23.6
	I0531 17:53:31.206664  230185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6
	I0531 17:53:31.206697  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.23.6
	I0531 17:53:31.206729  230185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.1-0
	I0531 17:53:31.206731  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.23.6
	I0531 17:53:31.211003  230185 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.23.6: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.23.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.23.6': No such file or directory
	I0531 17:53:31.211050  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 --> /var/lib/minikube/images/kube-proxy_v1.23.6 (39280128 bytes)
	I0531 17:53:31.234208  230185 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.23.6: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.23.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.23.6': No such file or directory
	I0531 17:53:31.234256  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 --> /var/lib/minikube/images/kube-controller-manager_v1.23.6 (30176256 bytes)
	I0531 17:53:31.234288  230185 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.23.6: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.23.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.23.6': No such file or directory
	I0531 17:53:31.234314  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 --> /var/lib/minikube/images/kube-apiserver_v1.23.6 (32604160 bytes)
	I0531 17:53:31.234358  230185 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0
	I0531 17:53:31.234458  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.1-0
	I0531 17:53:31.248086  230185 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.1-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.1-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.1-0': No such file or directory
	I0531 17:53:31.248117  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 --> /var/lib/minikube/images/etcd_3.5.1-0 (98891776 bytes)
	I0531 17:53:31.504361  230185 containerd.go:287] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.23.6
	I0531 17:53:31.504434  230185 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.23.6
	I0531 17:53:32.861664  230185 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.23.6: (1.357203998s)
	I0531 17:53:32.861692  230185 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 from cache
	I0531 17:53:32.861714  230185 containerd.go:287] Loading image: /var/lib/minikube/images/kube-apiserver_v1.23.6
	I0531 17:53:32.861755  230185 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.23.6
	I0531 17:53:34.218697  230185 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.23.6: (1.356917071s)
	I0531 17:53:34.218731  230185 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 from cache
	I0531 17:53:34.218760  230185 containerd.go:287] Loading image: /var/lib/minikube/images/kube-proxy_v1.23.6
	I0531 17:53:34.218802  230185 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.23.6
	I0531 17:53:35.762567  230185 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.23.6: (1.54373622s)
	I0531 17:53:35.762595  230185 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 from cache
	I0531 17:53:35.762620  230185 containerd.go:287] Loading image: /var/lib/minikube/images/etcd_3.5.1-0
	I0531 17:53:35.762667  230185 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0
	I0531 17:53:39.419220  230185 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0: (3.656526587s)
	I0531 17:53:39.419244  230185 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 from cache
	I0531 17:53:39.419273  230185 cache_images.go:123] Successfully loaded all cached images
	I0531 17:53:39.419277  230185 cache_images.go:92] LoadImages completed in 11.475963995s
	I0531 17:53:39.419315  230185 ssh_runner.go:195] Run: sudo crictl info
	I0531 17:53:39.443471  230185 cni.go:95] Creating CNI manager for ""
	I0531 17:53:39.443495  230185 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:53:39.443513  230185 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 17:53:39.443531  230185 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220531175323-6903 NodeName:no-preload-20220531175323-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 17:53:39.443685  230185 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220531175323-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
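One way to sanity-check a rendered kubeadm config like the one above without mutating the node is kubeadm's dry-run mode; assuming the file is written to /var/tmp/minikube/kubeadm.yaml as in this run:

    # Validate the rendered config; --dry-run prints what would be done without applying it.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run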
	I0531 17:53:39.443794  230185 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220531175323-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
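Note the empty ExecStart= line in the kubelet drop-in above: for systemd services, an empty assignment clears the ExecStart inherited from the base unit, so the drop-in's full command line replaces it instead of being rejected as a second ExecStart. After writing such a drop-in, the daemon has to be reloaded, roughly:

    # Pick up the new 10-kubeadm.conf drop-in and restart kubelet with the replaced command line.
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet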
	I0531 17:53:39.443860  230185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 17:53:39.450773  230185 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.23.6: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.23.6': No such file or directory
	
	Initiating transfer...
	I0531 17:53:39.450819  230185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.23.6
	I0531 17:53:39.457416  230185 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubelet.sha256
	I0531 17:53:39.457447  230185 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubeadm.sha256
	I0531 17:53:39.457483  230185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:53:39.457510  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.6/kubeadm
	I0531 17:53:39.457513  230185 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl.sha256
	I0531 17:53:39.457601  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.6/kubectl
	I0531 17:53:39.460980  230185 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.6/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.6/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.6/kubectl': No such file or directory
	I0531 17:53:39.461007  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.23.6/kubectl --> /var/lib/minikube/binaries/v1.23.6/kubectl (46596096 bytes)
	I0531 17:53:39.468259  230185 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.6/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.6/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.6/kubeadm': No such file or directory
	I0531 17:53:39.468292  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.23.6/kubeadm --> /var/lib/minikube/binaries/v1.23.6/kubeadm (45219840 bytes)
	I0531 17:53:39.468340  230185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.6/kubelet
	I0531 17:53:39.482474  230185 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.6/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.6/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.6/kubelet': No such file or directory
	I0531 17:53:39.482507  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/linux/amd64/v1.23.6/kubelet --> /var/lib/minikube/binaries/v1.23.6/kubelet (124542016 bytes)
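The "Not caching binary" records above describe checksum-guarded downloads. Using the kubelet URL exactly as it appears in the log, a minimal shell equivalent of the fetch-and-verify step would be (the release .sha256 files contain only the hex digest):

    # Fetch kubelet and verify it against the published SHA-256.
    BIN=https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubelet
    curl -fsSLo kubelet "$BIN"
    echo "$(curl -fsSL "$BIN.sha256")  kubelet" | sha256sum -c -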
	I0531 17:53:39.849391  230185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 17:53:39.856096  230185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (575 bytes)
	I0531 17:53:39.868292  230185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 17:53:39.881755  230185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2059 bytes)
	I0531 17:53:39.894740  230185 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 17:53:39.897479  230185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:53:39.906141  230185 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903 for IP: 192.168.67.2
	I0531 17:53:39.906221  230185 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 17:53:39.906257  230185 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 17:53:39.906316  230185 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.key
	I0531 17:53:39.906332  230185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.crt with IP's: []
	I0531 17:53:40.148144  230185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.crt ...
	I0531 17:53:40.148176  230185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.crt: {Name:mk5f59c6bfb59e958e47e4d6786aa2707db21de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:53:40.148364  230185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.key ...
	I0531 17:53:40.148379  230185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.key: {Name:mkfe89feebeeb2b9c325339e6b3f0157c26c03f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:53:40.148487  230185 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key.c7fa3a9e
	I0531 17:53:40.148506  230185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 17:53:40.436333  230185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt.c7fa3a9e ...
	I0531 17:53:40.436363  230185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt.c7fa3a9e: {Name:mke8f93dc2256abe217c1e0a8a46315a766f30f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:53:40.436530  230185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key.c7fa3a9e ...
	I0531 17:53:40.436548  230185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key.c7fa3a9e: {Name:mk56bfee6a59b1164862f20b8a192dcd4784ee0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:53:40.436638  230185 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt
	I0531 17:53:40.436692  230185 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key
	I0531 17:53:40.436739  230185 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key
	I0531 17:53:40.436753  230185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.crt with IP's: []
	I0531 17:53:40.772136  230185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.crt ...
	I0531 17:53:40.772164  230185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.crt: {Name:mkf3943066af142f0636fd5061b72400118ef469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:53:40.772327  230185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key ...
	I0531 17:53:40.772339  230185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key: {Name:mkbb8a7b35d4798419d2ab4296e2ffebb9a65001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:53:40.772500  230185 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 17:53:40.772532  230185 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 17:53:40.772545  230185 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 17:53:40.772571  230185 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 17:53:40.772595  230185 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 17:53:40.772620  230185 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 17:53:40.772661  230185 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:53:40.773156  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 17:53:40.791052  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 17:53:40.807596  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 17:53:40.823768  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 17:53:40.840550  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 17:53:40.860197  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 17:53:40.876190  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 17:53:40.893044  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 17:53:40.908900  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 17:53:40.925069  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 17:53:40.942882  230185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 17:53:40.959438  230185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 17:53:40.971920  230185 ssh_runner.go:195] Run: openssl version
	I0531 17:53:40.976882  230185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 17:53:40.984174  230185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 17:53:40.987028  230185 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 17:53:40.987073  230185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 17:53:40.992073  230185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 17:53:40.999199  230185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 17:53:41.007543  230185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:53:41.010370  230185 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:53:41.010418  230185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:53:41.014985  230185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 17:53:41.021926  230185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 17:53:41.030835  230185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 17:53:41.037045  230185 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 17:53:41.037101  230185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 17:53:41.042877  230185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
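The openssl x509 -hash calls above compute the subject-hash names (3ec20f2e.0, b5213941.0, 51391683.0) under which OpenSSL looks up CA certificates in /etc/ssl/certs. The link step can be reproduced generically:

    # Link a CA cert under its OpenSSL subject-hash name, as the commands above do.
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"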
	I0531 17:53:41.055488  230185 kubeadm.go:395] StartCluster: {Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:53:41.055612  230185 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 17:53:41.055676  230185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 17:53:41.081109  230185 cri.go:87] found id: ""
	I0531 17:53:41.081167  230185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 17:53:41.088163  230185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 17:53:41.094797  230185 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 17:53:41.094835  230185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 17:53:41.101089  230185 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 17:53:41.101125  230185 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 17:53:41.379652  230185 out.go:204]   - Generating certificates and keys ...
	I0531 17:53:43.993650  230185 out.go:204]   - Booting up control plane ...
	I0531 17:53:59.531768  230185 out.go:204]   - Configuring RBAC rules ...
	I0531 17:53:59.942146  230185 cni.go:95] Creating CNI manager for ""
	I0531 17:53:59.942169  230185 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:53:59.943734  230185 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 17:53:59.944955  230185 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 17:53:59.948653  230185 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 17:53:59.948671  230185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 17:53:59.961635  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 17:54:00.722052  230185 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 17:54:00.722097  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:00.722098  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531175323-6903 minikube.k8s.io/updated_at=2022_05_31T17_54_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:00.814768  230185 ops.go:34] apiserver oom_adj: -16
	I0531 17:54:00.814779  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:01.367753  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:01.868039  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:02.367284  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:02.867972  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:03.368109  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:03.867306  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:04.367750  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:04.867780  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:05.368066  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:05.868061  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:06.368005  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:06.867265  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:07.367705  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:07.867265  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:08.367285  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:08.867884  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:09.367700  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:09.868156  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:10.367170  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:10.867419  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:11.367647  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:11.867995  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:12.367802  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:12.868150  230185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:54:13.121651  230185 kubeadm.go:1045] duration metric: took 12.399596207s to wait for elevateKubeSystemPrivileges.
	I0531 17:54:13.121678  230185 kubeadm.go:397] StartCluster complete in 32.066200286s
	I0531 17:54:13.121703  230185 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:54:13.121788  230185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:54:13.123112  230185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:54:13.636857  230185 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531175323-6903" rescaled to 1
	I0531 17:54:13.636930  230185 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:54:13.636959  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 17:54:13.638516  230185 out.go:177] * Verifying Kubernetes components...
	I0531 17:54:13.637207  230185 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:54:13.637221  230185 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 17:54:13.639794  230185 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531175323-6903"
	I0531 17:54:13.639805  230185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:54:13.639816  230185 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531175323-6903"
	W0531 17:54:13.639828  230185 addons.go:165] addon storage-provisioner should already be in state true
	I0531 17:54:13.639828  230185 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531175323-6903"
	I0531 17:54:13.639852  230185 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531175323-6903"
	I0531 17:54:13.639871  230185 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 17:54:13.640196  230185 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 17:54:13.640323  230185 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 17:54:13.680826  230185 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:54:13.682245  230185 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:54:13.682269  230185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 17:54:13.682322  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:54:13.687981  230185 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531175323-6903"
	W0531 17:54:13.688015  230185 addons.go:165] addon default-storageclass should already be in state true
	I0531 17:54:13.688047  230185 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 17:54:13.688595  230185 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 17:54:13.720807  230185 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 17:54:13.721076  230185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 17:54:13.723572  230185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 17:54:13.733286  230185 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 17:54:13.733320  230185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 17:54:13.733374  230185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 17:54:13.775010  230185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 17:54:13.901944  230185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:54:14.016105  230185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 17:54:14.031260  230185 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0531 17:54:14.247980  230185 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 17:54:14.249402  230185 addons.go:417] enableAddons completed in 612.181865ms
	I0531 17:54:15.727845  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:18.227654  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:20.727736  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:23.227020  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:25.228004  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:27.728002  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:30.226935  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:32.227239  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:34.727049  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:36.727987  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:39.227190  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:41.227894  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:43.727202  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:45.727702  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:48.227819  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:50.228238  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:52.726750  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:54.726901  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:56.727702  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:54:59.227843  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:01.727986  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:04.227133  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:06.227709  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:08.727394  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:10.728136  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:13.227894  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:15.727572  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:17.728160  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:20.227022  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:22.227197  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:24.227257  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:26.727026  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:29.227991  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:31.230203  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:33.727745  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:36.227979  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:38.727247  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:40.727713  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:43.229527  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:45.726924  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:47.727917  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:50.227172  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:52.227771  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:54.727507  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:57.226720  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:55:59.228216  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:01.727843  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:03.728003  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:05.728379  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:07.996964  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:10.228074  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:12.727278  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:15.227876  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:17.802254  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:20.227821  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:22.727263  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:24.727660  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:27.227033  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:29.227948  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:31.228307  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:33.727689  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:35.727884  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:38.227543  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:40.727726  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:43.226875  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:45.227722  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:47.726708  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:49.727537  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:52.227351  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:54.727443  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:57.227430  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:59.726872  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:01.727580  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:03.727723  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:06.226943  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:08.227336  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:10.227620  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:12.228117  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:14.726998  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:16.727759  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:19.226771  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:21.227348  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:23.227471  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:25.227788  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:27.726749  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:29.728837  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:32.227880  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:34.727064  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:37.226651  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:39.226899  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:41.227775  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:43.227898  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:45.727788  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:48.227436  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:50.727048  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:52.727666  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:54.727760  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:57.227431  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:59.727047  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:01.727196  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:04.226687  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:06.227052  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:08.227935  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:10.726683  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:12.727527  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:13.729365  230185 node_ready.go:38] duration metric: took 4m0.008516004s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 17:58:13.731613  230185 out.go:177] 
	W0531 17:58:13.733108  230185 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 17:58:13.733126  230185 out.go:239] * 
	W0531 17:58:13.733818  230185 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 17:58:13.735217  230185 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-20220531175323-6903 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
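
The long run of node_ready entries above is minikube's readiness poll: the node's Ready condition is re-checked every ~2-2.5 seconds until the wait budget runs out (4m0.008s elapsed inside waitNodeCondition, against the overall "wait 6m0s for node" budget). Below is a minimal self-contained Go sketch of that poll-until-deadline pattern; checkNodeReady is a hypothetical stand-in for the status lookup behind node_ready.go, not minikube's actual helper.

package main

import (
	"errors"
	"fmt"
	"time"
)

// checkNodeReady is a hypothetical stand-in for the API call that reads the
// node's Ready condition (the lookup behind the node_ready.go:58 lines above).
func checkNodeReady() (bool, error) {
	return false, nil // always "Ready":"False", like the failing run
}

// waitNodeReady re-checks readiness on an interval until the deadline passes,
// mirroring the "wait 6m0s for node" budget recorded in the log.
func waitNodeReady(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := checkNodeReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("waitNodeCondition: timed out waiting for the condition")
}

func main() {
	// 2s/6m matches the cadence and budget seen above; shrink both to try it out.
	if err := waitNodeReady(2*time.Second, 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

In the failing run above, the node never left "Ready":"False", so the deadline elapsed and minikube surfaced the GUEST_START error with exit status 80.
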
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220531175323-6903
helpers_test.go:235: (dbg) docker inspect no-preload-20220531175323-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d",
	        "Created": "2022-05-31T17:53:25.199469079Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230732,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:53:25.538304199Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hosts",
	        "LogPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d-json.log",
	        "Name": "/no-preload-20220531175323-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220531175323-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220531175323-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/5442941f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/docker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220531175323-6903",
	                "Source": "/var/lib/docker/volumes/no-preload-20220531175323-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220531175323-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "name.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6413ea608901d520cb420be1567e8fbd6f13d85f29fc8ae60c4095bc5f68676",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6413ea60890",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220531175323-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a4f33d13fefc",
	                        "no-preload-20220531175323-6903"
	                    ],
	                    "NetworkID": "b2391a84ebd8e16dd2e9aca80777d6d03045cffc9cfc8290f45a61a1473c3244",
	                    "EndpointID": "81cd7594f26487ced42b2407b71e68ba6220c3d831ffa8d20b6ab5ac89aa38f6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
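
The dump above is standard docker inspect output: a JSON array with one object per inspected container, of which this post-mortem mainly uses the container state and the published ports. A short Go sketch, hedged to model only the subset of fields shown above, that shells out to docker inspect and prints those fields:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models only the docker-inspect fields used in this post-mortem.
type container struct {
	Name  string
	State struct {
		Status  string
		Running bool
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-20220531175323-6903").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	for _, c := range cs {
		fmt.Printf("%s: %s (running=%v)\n", c.Name, c.State.Status, c.State.Running)
		for port, binds := range c.NetworkSettings.Ports {
			for _, b := range binds {
				fmt.Printf("  %s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
}

The same fields are reachable from the CLI alone via format templates, e.g. docker inspect --format '{{.State.Status}}' no-preload-20220531175323-6903.
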
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220531175323-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p kindnet-20220531174029-6903                    | kindnet-20220531174029-6903               | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:45 UTC |
	| start   | -p cilium-20220531174030-6903                     | cilium-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:43 UTC | 31 May 22 17:45 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p cilium-20220531174030-6903                     | cilium-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:45 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p cilium-20220531174030-6903                     | cilium-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:45 UTC |
	| start   | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |                |                     |                     |
	|         | --disable-driver-mounts                           |                                           |         |                |                     |                     |
	|         | --keep-context=false                              |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | enable-default-cni-20220531174029-6903    | jenkins | v1.26.0-beta.1 | 31 May 22 17:44 UTC | 31 May 22 17:49 UTC |
	|         | enable-default-cni-20220531174029-6903            |                                           |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220531174029-6903    | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | enable-default-cni-20220531174029-6903            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| start   | -p bridge-20220531174029-6903                     | bridge-20220531174029-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:49 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p bridge-20220531174029-6903                     | bridge-20220531174029-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| logs    | calico-20220531174030-6903                        | calico-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| delete  | -p calico-20220531174030-6903                     | calico-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	| delete  | -p                                                | disable-driver-mounts-20220531175323-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | disable-driver-mounts-20220531175323-6903         |                                           |         |                |                     |                     |
	| start   | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:54 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |                |                     |                     |
	|         | --disable-driver-mounts                           |                                           |         |                |                     |                     |
	|         | --keep-context=false                              |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| delete  | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	| delete  | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903            | enable-default-cni-20220531174029-6903    | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                        | bridge-20220531174029-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220531174029-6903    | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903            |                                           |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                     | bridge-20220531174029-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 17:56:04
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:56:04.761070  243743 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:56:04.761198  243743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:04.761209  243743 out.go:309] Setting ErrFile to fd 2...
	I0531 17:56:04.761213  243743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:04.761320  243743 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:56:04.761604  243743 out.go:303] Setting JSON to false
	I0531 17:56:04.763369  243743 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5916,"bootTime":1654013849,"procs":806,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:56:04.763439  243743 start.go:125] virtualization: kvm guest
	I0531 17:56:04.765860  243743 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:56:04.767852  243743 notify.go:193] Checking for updates...
	I0531 17:56:04.767855  243743 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:56:04.769545  243743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:56:04.771229  243743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:56:04.772729  243743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:56:04.774183  243743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:56:04.776078  243743 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776263  243743 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776404  243743 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776470  243743 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:56:04.818427  243743 docker.go:137] docker version: linux-20.10.16
	I0531 17:56:04.818525  243743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:56:04.933426  243743 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:65 SystemTime:2022-05-31 17:56:04.851840173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:56:04.933610  243743 docker.go:254] overlay module found
	I0531 17:56:04.936012  243743 out.go:177] * Using the docker driver based on user configuration
	I0531 17:56:04.937461  243743 start.go:284] selected driver: docker
	I0531 17:56:04.937479  243743 start.go:806] validating driver "docker" against <nil>
	I0531 17:56:04.937498  243743 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:56:04.938476  243743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:56:05.050928  243743 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:65 SystemTime:2022-05-31 17:56:04.970943421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:56:05.051044  243743 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 17:56:05.051282  243743 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 17:56:05.053473  243743 out.go:177] * Using Docker driver with the root privilege
	I0531 17:56:05.054914  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:05.054932  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:05.054948  243743 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:56:05.054953  243743 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:56:05.054960  243743 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 17:56:05.054974  243743 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:56:05.056598  243743 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 17:56:05.058015  243743 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 17:56:05.059392  243743 out.go:177] * Pulling base image ...
	I0531 17:56:05.060693  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:05.060727  243743 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:56:05.060733  243743 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 17:56:05.060745  243743 cache.go:57] Caching tarball of preloaded images
	I0531 17:56:05.060946  243743 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 17:56:05.060966  243743 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 17:56:05.061099  243743 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 17:56:05.061132  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json: {Name:mk012ae752926ff69a2c9dc59c259dc1c0bd12d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:05.116443  243743 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 17:56:05.116476  243743 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 17:56:05.116493  243743 cache.go:206] Successfully downloaded all kic artifacts
	I0531 17:56:05.116542  243743 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:05.116688  243743 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 125.335µs
	I0531 17:56:05.116714  243743 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:56:05.116812  243743 start.go:131] createHost starting for "" (driver="docker")
	I0531 17:56:03.023989  242818 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 17:56:03.024207  242818 start.go:165] libmachine.API.Create for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 17:56:03.024237  242818 client.go:168] LocalClient.Create starting
	I0531 17:56:03.024313  242818 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:56:03.024346  242818 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:03.024362  242818 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:03.024421  242818 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:56:03.024440  242818 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:03.024449  242818 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:03.024728  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:56:03.055357  242818 cli_runner.go:211] docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:56:03.055434  242818 network_create.go:272] running [docker network inspect newest-cni-20220531175602-6903] to gather additional debugging logs...
	I0531 17:56:03.055459  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903
	W0531 17:56:03.085005  242818 cli_runner.go:211] docker network inspect newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:03.085045  242818 network_create.go:275] error running [docker network inspect newest-cni-20220531175602-6903]: docker network inspect newest-cni-20220531175602-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220531175602-6903
	I0531 17:56:03.085059  242818 network_create.go:277] output of [docker network inspect newest-cni-20220531175602-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220531175602-6903
	
	** /stderr **
	I0531 17:56:03.085110  242818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:56:03.115283  242818 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-09a226de47ed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e6:98:b2:7a}}
	I0531 17:56:03.115975  242818 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000010158] misses:0}
	I0531 17:56:03.116016  242818 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:56:03.116030  242818 network_create.go:115] attempt to create docker network newest-cni-20220531175602-6903 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 17:56:03.116075  242818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220531175602-6903
	I0531 17:56:03.183655  242818 network_create.go:99] docker network newest-cni-20220531175602-6903 192.168.58.0/24 created
	I0531 17:56:03.183692  242818 kic.go:106] calculated static IP "192.168.58.2" for the "newest-cni-20220531175602-6903" container
	I0531 17:56:03.183783  242818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:56:03.223995  242818 cli_runner.go:164] Run: docker volume create newest-cni-20220531175602-6903 --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:56:03.258196  242818 oci.go:103] Successfully created a docker volume newest-cni-20220531175602-6903
	I0531 17:56:03.258284  242818 cli_runner.go:164] Run: docker run --rm --name newest-cni-20220531175602-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --entrypoint /usr/bin/test -v newest-cni-20220531175602-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:56:04.358045  242818 cli_runner.go:217] Completed: docker run --rm --name newest-cni-20220531175602-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --entrypoint /usr/bin/test -v newest-cni-20220531175602-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (1.099719827s)
	I0531 17:56:04.358080  242818 oci.go:107] Successfully prepared a docker volume newest-cni-20220531175602-6903
	I0531 17:56:04.358118  242818 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:04.358142  242818 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 17:56:04.358190  242818 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220531175602-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 17:56:05.728379  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:07.996964  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:04.568475  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:06.568763  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:09.068463  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:05.119757  243743 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 17:56:05.119962  243743 start.go:165] libmachine.API.Create for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 17:56:05.119989  243743 client.go:168] LocalClient.Create starting
	I0531 17:56:05.120055  243743 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:56:05.120089  243743 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:05.120110  243743 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:05.120167  243743 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:56:05.120184  243743 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:05.120193  243743 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:05.120480  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:56:05.151452  243743 cli_runner.go:211] docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:56:05.151507  243743 network_create.go:272] running [docker network inspect embed-certs-20220531175604-6903] to gather additional debugging logs...
	I0531 17:56:05.151528  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903
	W0531 17:56:05.183019  243743 cli_runner.go:211] docker network inspect embed-certs-20220531175604-6903 returned with exit code 1
	I0531 17:56:05.183050  243743 network_create.go:275] error running [docker network inspect embed-certs-20220531175604-6903]: docker network inspect embed-certs-20220531175604-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220531175604-6903
	I0531 17:56:05.183073  243743 network_create.go:277] output of [docker network inspect embed-certs-20220531175604-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220531175604-6903
	
	** /stderr **
	I0531 17:56:05.183114  243743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:56:05.218722  243743 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060e2d8] misses:0}
	I0531 17:56:05.218771  243743 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:56:05.218788  243743 network_create.go:115] attempt to create docker network embed-certs-20220531175604-6903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 17:56:05.218827  243743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220531175604-6903
	I0531 17:56:05.286551  243743 network_create.go:99] docker network embed-certs-20220531175604-6903 192.168.49.0/24 created
	I0531 17:56:05.286588  243743 kic.go:106] calculated static IP "192.168.49.2" for the "embed-certs-20220531175604-6903" container
	I0531 17:56:05.286654  243743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:56:05.321293  243743 cli_runner.go:164] Run: docker volume create embed-certs-20220531175604-6903 --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:56:05.353388  243743 oci.go:103] Successfully created a docker volume embed-certs-20220531175604-6903
	I0531 17:56:05.353454  243743 cli_runner.go:164] Run: docker run --rm --name embed-certs-20220531175604-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --entrypoint /usr/bin/test -v embed-certs-20220531175604-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:56:10.010528  242818 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220531175602-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (5.652255027s)
	I0531 17:56:10.010581  242818 kic.go:188] duration metric: took 5.652433 seconds to extract preloaded images to volume
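
[Editor's note] The two entries above show the KIC preload path: the lz4 tarball of container images is bind-mounted read-only into a throwaway kicbase container and untarred directly into the profile's named volume, so the node container later boots with /var already populated. A minimal sketch of the same technique in Go, assuming only the stock docker CLI and os/exec; the image reference and host path here are placeholders, not the exact values from this run:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the `docker run --rm --entrypoint /usr/bin/tar ...` line above:
	// mount the preload tarball read-only, mount the profile volume at
	// /extractDir, and unpack with lz4 inside the base image.
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/tmp/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder host path
		"-v", "example-profile:/extractDir",                     // named docker volume
		"kicbase:example",                                       // placeholder image ref
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
	if err != nil {
		log.Fatalf("preload extraction failed: %v\n%s", err, out)
	}
}
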
	W0531 17:56:10.010767  242818 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:56:10.010898  242818 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:56:10.133622  242818 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	W0531 17:56:10.199092  242818 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 returned with exit code 125
	I0531 17:56:10.199198  242818 client.go:171] LocalClient.Create took 7.174948316s
	I0531 17:56:12.200394  242818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:56:12.200481  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:12.233921  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:12.234061  242818 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:12.510494  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:12.545210  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:12.545331  242818 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:10.228074  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:12.727278  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:11.567556  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:14.067647  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:10.414315  243743 cli_runner.go:217] Completed: docker run --rm --name embed-certs-20220531175604-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --entrypoint /usr/bin/test -v embed-certs-20220531175604-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (5.060822572s)
	I0531 17:56:10.414344  243743 oci.go:107] Successfully prepared a docker volume embed-certs-20220531175604-6903
	I0531 17:56:10.414377  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:10.414397  243743 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 17:56:10.414445  243743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220531175604-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 17:56:13.086142  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:13.119793  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:13.119929  242818 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:13.775273  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:13.805802  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	W0531 17:56:13.805936  242818 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 17:56:13.805957  242818 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:13.806040  242818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:56:13.806090  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:13.835894  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:13.836013  242818 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:14.068325  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:14.099888  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:14.100007  242818 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:14.545599  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:14.577377  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:14.577470  242818 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:14.895935  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:14.927184  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:14.927294  242818 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:15.482091  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:15.513112  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	W0531 17:56:15.513206  242818 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 17:56:15.513221  242818 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:15.513227  242818 start.go:134] duration metric: createHost completed in 12.491379014s
	I0531 17:56:15.513242  242818 start.go:81] releasing machines lock for "newest-cni-20220531175602-6903", held for 12.491483099s
	W0531 17:56:15.513281  242818 start.go:599] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a
	
	stderr:
	docker: Error response from daemon: network newest-cni-20220531175602-6903 not found.
	I0531 17:56:15.513666  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	W0531 17:56:15.544733  242818 start.go:604] delete host: Docker machine "newest-cni-20220531175602-6903" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0531 17:56:15.544907  242818 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a
	
	stderr:
	docker: Error response from daemon: network newest-cni-20220531175602-6903 not found.
	
	I0531 17:56:15.544924  242818 start.go:614] Will try again in 5 seconds ...
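
[Editor's note] This is the root failure for newest-cni-20220531175602-6903: the network was created successfully at 17:56:03 (network_create.go:99), yet the `docker run --network ...` at 17:56:10 fails with "network ... not found", so the container create exits 125 and minikube enters its retry path. The surrounding lines point to concurrent network churn rather than a create bug: at 17:56:03 process 242818 saw 192.168.49.0/24 as taken, but at 17:56:05 process 243743 found the same subnet free, which is consistent with another profile's teardown pruning docker networks in between. A minimal sketch of a pre-flight guard against this race, assuming the stock docker CLI and os/exec (the helper name is hypothetical, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
)

// networkExists asks the docker CLI whether the named network is still
// present; `docker network inspect` exits non-zero when it is gone, which is
// exactly the condition logged above.
func networkExists(name string) bool {
	return exec.Command("docker", "network", "inspect", name).Run() == nil
}

func main() {
	name := "newest-cni-20220531175602-6903"
	if !networkExists(name) {
		fmt.Printf("network %s vanished before container create; recreating\n", name)
		// Recreate with the core flags the log shows minikube using.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
			name).CombinedOutput()
		if err != nil {
			fmt.Printf("recreate failed: %v: %s\n", err, out)
		}
	}
}
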
	I0531 17:56:15.227876  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:17.802254  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:16.149146  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:18.567711  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:17.822157  243743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220531175604-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (7.407658192s)
	I0531 17:56:17.822188  243743 kic.go:188] duration metric: took 7.407788 seconds to extract preloaded images to volume
	W0531 17:56:17.822300  243743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:56:17.822377  243743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:56:17.917371  243743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20220531175604-6903 --name embed-certs-20220531175604-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --network embed-certs-20220531175604-6903 --ip 192.168.49.2 --volume embed-certs-20220531175604-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 17:56:18.309484  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Running}}
	I0531 17:56:18.345151  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.377155  243743 cli_runner.go:164] Run: docker exec embed-certs-20220531175604-6903 stat /var/lib/dpkg/alternatives/iptables
	I0531 17:56:18.433758  243743 oci.go:247] the created container "embed-certs-20220531175604-6903" has a running status.
	I0531 17:56:18.433787  243743 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa...
	I0531 17:56:18.651045  243743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 17:56:18.737122  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.772057  243743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 17:56:18.772085  243743 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220531175604-6903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 17:56:18.848259  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.882465  243743 machine.go:88] provisioning docker machine ...
	I0531 17:56:18.882498  243743 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 17:56:18.882541  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:18.915976  243743 main.go:134] libmachine: Using SSH client type: native
	I0531 17:56:18.916173  243743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0531 17:56:18.916203  243743 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 17:56:19.035081  243743 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 17:56:19.035195  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.067189  243743 main.go:134] libmachine: Using SSH client type: native
	I0531 17:56:19.067362  243743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0531 17:56:19.067394  243743 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:56:19.174458  243743 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 17:56:19.174496  243743 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 17:56:19.174513  243743 ubuntu.go:177] setting up certificates
	I0531 17:56:19.174522  243743 provision.go:83] configureAuth start
	I0531 17:56:19.174563  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.206506  243743 provision.go:138] copyHostCerts
	I0531 17:56:19.206555  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 17:56:19.206563  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 17:56:19.206631  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 17:56:19.206727  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 17:56:19.206748  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 17:56:19.206785  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 17:56:19.206926  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 17:56:19.206955  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 17:56:19.206994  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 17:56:19.207074  243743 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
	I0531 17:56:19.354064  243743 provision.go:172] copyRemoteCerts
	I0531 17:56:19.354118  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:56:19.354167  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.385431  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.465829  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 17:56:19.482372  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 17:56:19.498469  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 17:56:19.514414  243743 provision.go:86] duration metric: configureAuth took 339.882889ms
	I0531 17:56:19.514440  243743 ubuntu.go:193] setting minikube options for container-runtime
	I0531 17:56:19.514580  243743 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:19.514593  243743 machine.go:91] provisioned docker machine in 632.108026ms
	I0531 17:56:19.514598  243743 client.go:171] LocalClient.Create took 14.394605814s
	I0531 17:56:19.514618  243743 start.go:173] duration metric: libmachine.API.Create for "embed-certs-20220531175604-6903" took 14.394651417s
	I0531 17:56:19.514628  243743 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 17:56:19.514633  243743 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:56:19.514668  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:56:19.514710  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.545303  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.630150  243743 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:56:19.632669  243743 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 17:56:19.632694  243743 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 17:56:19.632704  243743 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 17:56:19.632709  243743 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 17:56:19.632717  243743 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 17:56:19.632765  243743 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 17:56:19.632826  243743 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 17:56:19.632902  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 17:56:19.639100  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:56:19.655411  243743 start.go:309] post-start completed in 140.773803ms
	I0531 17:56:19.655734  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.685847  243743 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 17:56:19.686049  243743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:56:19.686083  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.714435  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.795018  243743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:56:19.798696  243743 start.go:134] duration metric: createHost completed in 14.681876121s
	I0531 17:56:19.798719  243743 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 14.682014704s
	I0531 17:56:19.798794  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.828548  243743 ssh_runner.go:195] Run: systemctl --version
	I0531 17:56:19.828596  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.828647  243743 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 17:56:19.828701  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.859164  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.861276  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.956313  243743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 17:56:19.965586  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 17:56:19.973861  243743 docker.go:187] disabling docker service ...
	I0531 17:56:19.973905  243743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 17:56:19.988442  243743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 17:56:19.996566  243743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 17:56:20.078652  243743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 17:56:20.157370  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 17:56:20.165848  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 17:56:20.177829  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
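
[Editor's note] The long base64 blob above is /etc/containerd/config.toml in transit: minikube ships the rendered TOML base64-encoded so it survives shell quoting, then decodes it on the node with `base64 -d | sudo tee`. Decoding it locally shows the containerd settings this run applied, e.g. sandbox_image = "k8s.gcr.io/pause:3.6", SystemdCgroup = false, and the CNI conf_dir = "/etc/cni/net.mk" that matches the kubelet flag further below. A minimal decode helper, assuming only the Go standard library:

package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// Usage: go run decode.go <base64-blob-from-the-log>
	// Prints the TOML that ends up in /etc/containerd/config.toml.
	decoded, err := base64.StdEncoding.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "not valid base64:", err)
		os.Exit(1)
	}
	os.Stdout.Write(decoded)
}
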
	I0531 17:56:20.190831  243743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 17:56:20.196751  243743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 17:56:20.202752  243743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:56:20.278651  243743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 17:56:20.337554  243743 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 17:56:20.337615  243743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 17:56:20.341007  243743 start.go:468] Will wait 60s for crictl version
	I0531 17:56:20.341061  243743 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:56:20.365853  243743 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 17:56:20.365913  243743 ssh_runner.go:195] Run: containerd --version
	I0531 17:56:20.392463  243743 ssh_runner.go:195] Run: containerd --version
	I0531 17:56:20.420160  243743 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 17:56:20.421445  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:56:20.449939  243743 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 17:56:20.452980  243743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:56:20.464025  243743 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 17:56:20.545068  242818 start.go:352] acquiring machines lock for newest-cni-20220531175602-6903: {Name:mk17d90b6d3b0fde22fd963b3786e868dc154060 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:20.545189  242818 start.go:356] acquired machines lock for "newest-cni-20220531175602-6903" in 91.113µs
	I0531 17:56:20.545218  242818 start.go:94] Skipping create...Using existing machine configuration
	I0531 17:56:20.545229  242818 fix.go:55] fixHost starting: 
	I0531 17:56:20.545535  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:20.578422  242818 fix.go:103] recreateIfNeeded on newest-cni-20220531175602-6903: state= err=<nil>
	I0531 17:56:20.578447  242818 fix.go:108] machineExists: false. err=machine does not exist
	I0531 17:56:20.580567  242818 out.go:177] * docker "newest-cni-20220531175602-6903" container is missing, will recreate.
	I0531 17:56:20.465273  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:20.465321  243743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:56:20.488641  243743 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:56:20.488661  243743 containerd.go:521] Images already preloaded, skipping extraction
	I0531 17:56:20.488697  243743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:56:20.509499  243743 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:56:20.509516  243743 cache_images.go:84] Images are preloaded, skipping loading
	I0531 17:56:20.509549  243743 ssh_runner.go:195] Run: sudo crictl info
	I0531 17:56:20.531616  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:20.531635  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:20.531651  243743 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 17:56:20.531662  243743 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 17:56:20.531784  243743 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 17:56:20.531857  243743 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
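
[Editor's note] The config struct above closes the loop on the profile's ExtraOptions entry {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}: it is rendered into the --cni-conf-dir=/etc/cni/net.mk flag visible in the ExecStart line. A hypothetical reduction of that rendering step, with illustrative names rather than minikube's actual API:

package main

import "fmt"

// ExtraOption mirrors the {Component Key Value} triples dumped in the log.
type ExtraOption struct{ Component, Key, Value string }

// flagsFor turns one component's extra options into CLI flags, the way the
// kubelet ExecStart line above picks up cni-conf-dir.
func flagsFor(component string, opts []ExtraOption) []string {
	var flags []string
	for _, o := range opts {
		if o.Component == component {
			flags = append(flags, fmt.Sprintf("--%s=%s", o.Key, o.Value))
		}
	}
	return flags
}

func main() {
	opts := []ExtraOption{{Component: "kubelet", Key: "cni-conf-dir", Value: "/etc/cni/net.mk"}}
	fmt.Println(flagsFor("kubelet", opts)) // [--cni-conf-dir=/etc/cni/net.mk]
}
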
	I0531 17:56:20.531896  243743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 17:56:20.538217  243743 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 17:56:20.538272  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 17:56:20.544669  243743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 17:56:20.559115  243743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 17:56:20.572041  243743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0531 17:56:20.584477  243743 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 17:56:20.587199  243743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:56:20.595757  243743 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 17:56:20.595849  243743 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 17:56:20.595883  243743 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 17:56:20.595923  243743 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 17:56:20.595935  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt with IP's: []
	I0531 17:56:20.865002  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt ...
	I0531 17:56:20.865034  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt: {Name:mk983f1351054e3a81162f051295cd0c506fcbd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:20.865199  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key ...
	I0531 17:56:20.865212  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key: {Name:mk405f0b57d526c28409acadcba4d956d1f0d13c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:20.865300  243743 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 17:56:20.865315  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 17:56:21.075006  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 ...
	I0531 17:56:21.075031  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2: {Name:mk45ad740db68b95692d916b33d8e02d8dba1ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.075247  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2 ...
	I0531 17:56:21.075264  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2: {Name:mkf42168f7ee31852f4a02d1ef506d7d5a8f7b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.075382  243743 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt
	I0531 17:56:21.075465  243743 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key
	I0531 17:56:21.075522  243743 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 17:56:21.075541  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt with IP's: []
	I0531 17:56:21.134487  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt ...
	I0531 17:56:21.134511  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt: {Name:mk1893a7aa78a1763283fdce57e297466ab59148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.134682  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key ...
	I0531 17:56:21.134698  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key: {Name:mk25a32f5f241a2369c40634bda3a1e4c75a34a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.134919  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 17:56:21.134955  243743 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 17:56:21.134967  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 17:56:21.134989  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 17:56:21.135011  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 17:56:21.135032  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 17:56:21.135067  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:56:21.135606  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 17:56:21.153391  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 17:56:21.169555  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 17:56:21.185845  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 17:56:21.201921  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 17:56:21.217951  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 17:56:21.234159  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 17:56:21.250138  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 17:56:21.266294  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 17:56:21.282188  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 17:56:21.298126  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 17:56:21.313822  243743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 17:56:21.325312  243743 ssh_runner.go:195] Run: openssl version
	I0531 17:56:21.329803  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 17:56:21.336618  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.339408  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.339449  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.343890  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 17:56:21.350793  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 17:56:21.357753  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.360608  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.360649  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.365192  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 17:56:21.372353  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 17:56:21.379136  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.382027  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.382074  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.386461  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 17:56:21.393159  243743 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:56:21.393235  243743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 17:56:21.393282  243743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 17:56:21.417667  243743 cri.go:87] found id: ""
	I0531 17:56:21.417716  243743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 17:56:21.424169  243743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 17:56:21.430469  243743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 17:56:21.430524  243743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 17:56:21.436945  243743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 17:56:21.436981  243743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 17:56:20.581860  242818 delete.go:124] DEMOLISHING newest-cni-20220531175602-6903 ...
	I0531 17:56:20.581935  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:20.611874  242818 stop.go:79] host is in state 
	I0531 17:56:20.611927  242818 main.go:134] libmachine: Stopping "newest-cni-20220531175602-6903"...
	I0531 17:56:20.611984  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:20.641012  242818 kic_runner.go:93] Run: systemctl --version
	I0531 17:56:20.641037  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 systemctl --version]
	I0531 17:56:20.671812  242818 kic_runner.go:93] Run: sudo service kubelet stop
	I0531 17:56:20.671834  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 sudo service kubelet stop]
	I0531 17:56:20.701522  242818 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	
	** /stderr **
	W0531 17:56:20.701538  242818 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	I0531 17:56:20.701588  242818 kic_runner.go:93] Run: sudo service kubelet stop
	I0531 17:56:20.701603  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 sudo service kubelet stop]
	I0531 17:56:20.731170  242818 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	
	** /stderr **
	W0531 17:56:20.731205  242818 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	I0531 17:56:20.731224  242818 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0531 17:56:20.731280  242818 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0531 17:56:20.731291  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0531 17:56:20.760003  242818 kic.go:452] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	I0531 17:56:20.760024  242818 kic.go:462] successfully stopped kubernetes!
	I0531 17:56:20.760059  242818 kic_runner.go:93] Run: pgrep kube-apiserver
	I0531 17:56:20.760069  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 pgrep kube-apiserver]
	I0531 17:56:20.819896  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:20.227821  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:22.727263  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:21.067679  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:23.568416  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:21.673625  243743 out.go:204]   - Generating certificates and keys ...
	I0531 17:56:24.424606  243743 out.go:204]   - Booting up control plane ...
	I0531 17:56:23.853003  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:26.887653  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:24.727660  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:27.227033  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:26.067591  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:28.067620  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:29.919576  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:29.227948  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:31.228307  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:33.727689  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:30.067718  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:32.068461  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:35.960123  243743 out.go:204]   - Configuring RBAC rules ...
	I0531 17:56:36.372484  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:36.372507  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:36.374314  243743 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 17:56:32.961228  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:35.995237  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:35.727884  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:38.227543  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:34.568127  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:37.067720  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:39.067893  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:36.375646  243743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 17:56:36.378972  243743 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 17:56:36.378987  243743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 17:56:36.391818  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 17:56:37.134192  243743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 17:56:37.134265  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.134293  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T17_56_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.216998  243743 ops.go:34] apiserver oom_adj: -16
	I0531 17:56:37.217013  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.772320  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:38.272340  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:38.772405  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:39.271965  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:39.035267  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:42.067914  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:40.727726  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:43.226875  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:41.568146  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:43.568231  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:39.772232  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:40.272312  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:40.772850  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:41.271928  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:41.772325  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:42.272975  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:42.772307  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:43.272387  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:43.772248  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:44.272332  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:45.101303  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:45.227722  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:47.726708  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:46.067788  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:48.068015  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:44.771976  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:45.272400  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:45.772386  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:46.272935  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:46.772442  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:47.272553  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:47.771899  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:48.272016  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:48.772018  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.272518  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.772710  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.826843  243743 kubeadm.go:1045] duration metric: took 12.692623892s to wait for elevateKubeSystemPrivileges.
	I0531 17:56:49.826874  243743 kubeadm.go:397] StartCluster complete in 28.433719659s
	I0531 17:56:49.826894  243743 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:49.826995  243743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:56:49.829203  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:50.344768  243743 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 17:56:50.344838  243743 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:56:50.346548  243743 out.go:177] * Verifying Kubernetes components...
	I0531 17:56:50.344908  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 17:56:50.345125  243743 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:50.345145  243743 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 17:56:50.347978  243743 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 17:56:50.348002  243743 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 17:56:50.348010  243743 addons.go:165] addon storage-provisioner should already be in state true
	I0531 17:56:50.348024  243743 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 17:56:50.348045  243743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 17:56:50.348060  243743 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 17:56:50.347983  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:56:50.348416  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.348631  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.389959  243743 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:56:50.391301  243743 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:56:50.391319  243743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 17:56:50.391356  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:50.404254  243743 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 17:56:50.404284  243743 addons.go:165] addon default-storageclass should already be in state true
	I0531 17:56:50.404312  243743 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 17:56:50.404824  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.426361  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:50.436141  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 17:56:50.437624  243743 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 17:56:50.442307  243743 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 17:56:50.442324  243743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 17:56:50.442358  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:50.480148  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:50.522400  243743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:56:50.716771  243743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 17:56:50.801813  243743 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0531 17:56:50.954578  243743 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 17:56:48.133829  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:51.167150  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:49.727537  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:52.227351  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:50.068503  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:52.568352  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:50.955732  243743 addons.go:417] enableAddons completed in 610.588239ms
	I0531 17:56:52.444126  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:54.200343  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:57.235252  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:54.727443  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:57.227430  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:55.068307  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:57.568336  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:54.943796  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:57.443053  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:59.443687  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:00.275305  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:59.726872  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:01.727580  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:03.727723  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:00.068172  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:02.568206  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:01.943080  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:03.944340  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:03.307575  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:06.343271  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:06.226943  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:08.227336  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:05.067673  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:07.067801  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:06.443747  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:08.942997  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:09.378462  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:12.412507  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:10.227620  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:12.228117  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:09.567953  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:12.067156  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:14.068074  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:10.943575  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:13.443323  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:15.447289  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:14.726998  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:16.727759  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:16.567424  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:18.567581  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:15.443608  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:17.944142  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:18.480515  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:21.512524  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:19.226771  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:21.227348  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:23.227471  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:20.567641  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:22.567732  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:20.443463  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:22.443654  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:24.444191  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:24.547268  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:27.579950  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:25.227788  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:27.726749  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:24.568096  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:26.568336  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:28.568424  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:26.943262  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:28.943501  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:30.614807  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:29.728837  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:32.227880  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:31.068035  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:33.568490  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:30.943907  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:32.944301  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:33.647113  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:36.680389  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:34.727064  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:37.226651  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:36.067248  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:38.068153  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:35.443484  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:37.943999  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:39.713424  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:39.226899  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:41.227775  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:43.227898  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:40.567678  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:42.567872  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:40.443245  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:42.444152  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:42.747612  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:45.781754  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:45.727788  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:48.227436  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:44.568143  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:47.068158  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:44.944382  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:47.443968  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:49.445784  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:48.815262  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:51.847719  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:50.727048  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:52.727666  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:49.568530  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:52.067793  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:54.068016  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:51.944333  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:54.443331  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:54.881323  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:54.727760  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:57.227431  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:56.567511  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:58.567932  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:56.443483  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:58.942972  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:57.915359  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:00.949899  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:59.727047  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:01.727196  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:00.568483  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:03.068327  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:00.944139  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:03.443519  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:03.982733  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:07.016348  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:04.226687  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:06.227052  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:08.227935  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:05.566754  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:07.567450  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:05.943200  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:07.943995  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:10.053975  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:10.726683  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:12.727527  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:13.729365  230185 node_ready.go:38] duration metric: took 4m0.008516004s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 17:58:13.731613  230185 out.go:177] 
	W0531 17:58:13.733108  230185 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 17:58:13.733126  230185 out.go:239] * 
	W0531 17:58:13.733818  230185 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 17:58:13.735217  230185 out.go:177] 
	I0531 17:58:09.568294  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:12.067392  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:14.068088  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
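
Note: the sections below are the post-mortem log bundle collected after the 6m node-ready wait above expired with GUEST_START. The condition blocking readiness can be read straight off the node object; an illustrative check (standard kubectl, not part of the recorded test run):

	kubectl --context no-preload-20220531175323-6903 get node no-preload-20220531175323-6903 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'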
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	7b4a921aa6a00       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   df6575866fdea
	2cf4809512c1b       6de166512aa22       3 minutes ago        Exited              kindnet-cni               0                   df6575866fdea
	b12fda9e12e52       4c03754524064       4 minutes ago        Running             kube-proxy                0                   988de4837f61f
	91afec248cd26       595f327f224a4       4 minutes ago        Running             kube-scheduler            0                   8ae5c296424b2
	2d2cb82735b88       df7b72818ad2e       4 minutes ago        Running             kube-controller-manager   0                   ac6eb1a1a0685
	0d1755990bfb1       8fa62c12256df       4 minutes ago        Running             kube-apiserver            0                   dc54f5b9ebd0e
	c25ff47b27774       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   d852e12f002ef
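
Note: coredns and storage-provisioner are absent from this container list because no pod that needs pod networking could start; only static control-plane pods and host-network daemonsets (kube-proxy, kindnet) are running. Assuming the profile is still up, the same table can be reproduced from the node's CRI runtime (illustrative):

	out/minikube-linux-amd64 ssh -p no-preload-20220531175323-6903 -- sudo crictl ps -a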
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:53:25 UTC, end at Tue 2022-05-31 17:58:14 UTC. --
	May 31 17:54:13 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:13.432547988Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724 pid=2136 runtime=io.containerd.runc.v2
	May 31 17:54:13 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:13.495337560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8szbz,Uid:e7e66d9f-358e-4d5f-b12d-541da7f43741,Namespace:kube-system,Attempt:0,} returns sandbox id \"988de4837f61f2aa38b2d77788717f111997c7e5144c7abb16bc9c48a61fb618\""
	May 31 17:54:13 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:13.498294143Z" level=info msg="CreateContainer within sandbox \"988de4837f61f2aa38b2d77788717f111997c7e5144c7abb16bc9c48a61fb618\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	May 31 17:54:13 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:13.511781299Z" level=info msg="CreateContainer within sandbox \"988de4837f61f2aa38b2d77788717f111997c7e5144c7abb16bc9c48a61fb618\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e\""
	May 31 17:54:13 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:13.512443064Z" level=info msg="StartContainer for \"b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e\""
	May 31 17:54:13 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:13.568886749Z" level=info msg="StartContainer for \"b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e\" returns successfully"
	May 31 17:54:13 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:13.705921754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-n856k,Uid:1bf232e0-3302-4413-8693-378d7bcc2bad,Namespace:kube-system,Attempt:0,} returns sandbox id \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\""
	May 31 17:54:13 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:13.708727707Z" level=info msg="PullImage \"kindest/kindnetd:v20210326-1e038dc5\""
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.256040971Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd:v20210326-1e038dc5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.257981399Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.259769306Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kindest/kindnetd:v20210326-1e038dc5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.261518386Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.261924725Z" level=info msg="PullImage \"kindest/kindnetd:v20210326-1e038dc5\" returns image reference \"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\""
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.263725334Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.275492615Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee\""
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.275889392Z" level=info msg="StartContainer for \"2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee\""
	May 31 17:54:18 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:54:18.405438081Z" level=info msg="StartContainer for \"2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee\" returns successfully"
	May 31 17:56:58 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:56:58.642954328Z" level=info msg="shim disconnected" id=2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee
	May 31 17:56:58 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:56:58.643018520Z" level=warning msg="cleaning up after shim disconnected" id=2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee namespace=k8s.io
	May 31 17:56:58 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:56:58.643034193Z" level=info msg="cleaning up dead shim"
	May 31 17:56:58 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:56:58.651592323Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:56:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2547 runtime=io.containerd.runc.v2\n"
	May 31 17:56:59 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:56:59.204880182Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	May 31 17:56:59 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:56:59.217550116Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb\""
	May 31 17:56:59 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:56:59.218025815Z" level=info msg="StartContainer for \"7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb\""
	May 31 17:56:59 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:56:59.316679224Z" level=info msg="StartContainer for \"7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb\" returns successfully"
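
Note: containerd shows kindnet-cni attempt 0 exiting at 17:56:58 ("shim disconnected"), about 2m40s after it started, and kubelet recreating it as attempt 1; the CNI config kindnet is supposed to install evidently never took effect. The previous container's output usually names the reason (illustrative; -p selects the previous instance):

	kubectl --context no-preload-20220531175323-6903 -n kube-system logs -p kindnet-n856k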
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220531175323-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220531175323-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=no-preload-20220531175323-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_54_00_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:53:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220531175323-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 17:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 17:54:35 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 17:54:35 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 17:54:35 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 17:54:35 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220531175323-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                3f650030-6900-444d-b03b-802678a62df1
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220531175323-6903                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-n856k                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-no-preload-20220531175323-6903              250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-no-preload-20220531175323-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-8szbz                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-no-preload-20220531175323-6903              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m10s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s   kubelet     Updated Node Allocatable limit across pods
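
Note: the Ready=False message above ("cni plugin not initialized") means containerd found no usable CNI configuration, which is what the not-ready taint and the missing coredns pods follow from. One way to confirm is to look at the config directory the CNI daemonset is expected to populate (illustrative; this is containerd's default CNI conf dir):

	out/minikube-linux-amd64 ssh -p no-preload-20220531175323-6903 -- ls -la /etc/cni/net.d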
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66] <==
	* {"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220531175323-6903 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:53:54.710Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-05-31T17:55:16.484Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.714125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:55:16.484Z","caller":"traceutil/trace.go:171","msg":"trace[628559206] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:490; }","duration":"114.829832ms","start":"2022-05-31T17:55:16.369Z","end":"2022-05-31T17:55:16.484Z","steps":["trace[628559206] 'range keys from in-memory index tree'  (duration: 101.483588ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:56:07.995Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"269.683342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220531175323-6903\" ","response":"range_response_count:1 size:3986"}
	{"level":"info","ts":"2022-05-31T17:56:07.995Z","caller":"traceutil/trace.go:171","msg":"trace[1668449359] range","detail":"{range_begin:/registry/minions/no-preload-20220531175323-6903; range_end:; response_count:1; response_revision:502; }","duration":"269.788171ms","start":"2022-05-31T17:56:07.725Z","end":"2022-05-31T17:56:07.995Z","steps":["trace[1668449359] 'range keys from in-memory index tree'  (duration: 269.568906ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T17:56:08.636Z","caller":"traceutil/trace.go:171","msg":"trace[630377434] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"212.121446ms","start":"2022-05-31T17:56:08.424Z","end":"2022-05-31T17:56:08.636Z","steps":["trace[630377434] 'process raft request'  (duration: 175.233347ms)","trace[630377434] 'compare'  (duration: 36.797114ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:13.831Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.273396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220531175323-6903\" ","response":"range_response_count:1 size:3986"}
	{"level":"info","ts":"2022-05-31T17:56:13.831Z","caller":"traceutil/trace.go:171","msg":"trace[1174066299] range","detail":"{range_begin:/registry/minions/no-preload-20220531175323-6903; range_end:; response_count:1; response_revision:503; }","duration":"105.354475ms","start":"2022-05-31T17:56:13.726Z","end":"2022-05-31T17:56:13.831Z","steps":["trace[1174066299] 'range keys from in-memory index tree'  (duration: 105.148687ms)"],"step_count":1}
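
Note: the "apply request took too long" warnings flag reads that exceeded etcd's 100ms expected-duration; on a shared CI host these sub-second slow reads are routine and unrelated to the CNI failure. They can be pulled out of a full log bundle for trending (illustrative):

	out/minikube-linux-amd64 -p no-preload-20220531175323-6903 logs | grep 'took too long'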
	
	* 
	* ==> kernel <==
	*  17:58:14 up  1:40,  0 users,  load average: 0.36, 1.09, 1.65
	Linux no-preload-20220531175323-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509] <==
	* I0531 17:53:57.037883       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:53:57.037926       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:53:57.040098       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:53:57.101460       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 17:53:57.101814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:53:57.101904       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:53:57.936377       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:53:57.936399       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:53:57.941719       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:53:57.944313       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:53:57.944333       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:53:58.331802       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:53:58.360400       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:53:58.421652       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:53:58.426532       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0531 17:53:58.427280       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:53:58.430236       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:53:59.065574       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:53:59.723054       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:53:59.729203       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:53:59.737186       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:54:04.817265       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:54:12.817514       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:54:12.904631       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:54:13.634581       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d] <==
	* I0531 17:54:12.902059       1 shared_informer.go:247] Caches are synced for node 
	I0531 17:54:12.902089       1 range_allocator.go:173] Starting range CIDR allocator
	I0531 17:54:12.902093       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0531 17:54:12.902100       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0531 17:54:12.902118       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0531 17:54:12.902170       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0531 17:54:12.902710       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0531 17:54:12.902766       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0531 17:54:12.902864       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:54:12.903597       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-ndl5c"
	I0531 17:54:12.916498       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8cptk"
	I0531 17:54:12.918139       1 range_allocator.go:374] Set node no-preload-20220531175323-6903 PodCIDR to [10.244.0.0/24]
	I0531 17:54:12.924160       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n856k"
	I0531 17:54:12.924335       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8szbz"
	I0531 17:54:13.002190       1 shared_informer.go:247] Caches are synced for cronjob 
	I0531 17:54:13.101463       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:54:13.101545       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:54:13.101587       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:54:13.111547       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 17:54:13.111574       1 disruption.go:371] Sending events to api server.
	I0531 17:54:13.138785       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:54:13.145126       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-ndl5c"
	I0531 17:54:13.511066       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:54:13.511070       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:54:13.511116       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e] <==
	* I0531 17:54:13.606562       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 17:54:13.606652       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 17:54:13.606701       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:54:13.631765       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:54:13.631796       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:54:13.631804       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:54:13.631825       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:54:13.632185       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:54:13.632671       1 config.go:317] "Starting service config controller"
	I0531 17:54:13.632688       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:54:13.632706       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:54:13.632709       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:54:13.735397       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 17:54:13.735427       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b] <==
	* W0531 17:53:57.017973       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:53:57.018249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:53:57.018293       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:57.018334       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:53:57.018340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:57.018350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:53:57.866168       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:53:57.866195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:53:57.880204       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:57.880227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:58.004410       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:58.004458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:58.020771       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:53:58.020798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:53:58.044767       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:53:58.044798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:53:58.102366       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:53:58.102398       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:53:58.102392       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:53:58.102427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:53:58.155068       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:53:58.155107       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:53:58.202503       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:53:58.202555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0531 17:53:58.514561       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
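
Note: the burst of "forbidden" list/watch errors is the usual kubeadm bootstrap race: the scheduler starts before its RBAC bindings exist and retries until they are created; the closing "Caches are synced" line shows it recovered. A quick health signal is its leader-election lease (illustrative; assumes the default Lease-based election):

	kubectl --context no-preload-20220531175323-6903 -n kube-system get lease kube-scheduler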
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:53:25 UTC, end at Tue 2022-05-31 17:58:15 UTC. --
	May 31 17:56:15 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:15.038981    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:20 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:20.040428    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:25 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:25.041049    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:30 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:30.041833    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:35 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:35.043624    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:40 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:40.044349    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:45 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:45.045272    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:50 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:50.046052    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:55 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:56:55.047028    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:56:59 no-preload-20220531175323-6903 kubelet[1738]: I0531 17:56:59.200082    1738 scope.go:110] "RemoveContainer" containerID="2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee"
	May 31 17:57:00 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:00.048248    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:05 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:05.048761    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:10 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:10.050299    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:15 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:15.051750    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:20 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:20.052728    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:25 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:25.053569    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:30 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:30.055034    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:35 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:35.056470    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:40 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:40.057382    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:45 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:45.058924    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:50 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:50.059803    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:57:55 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:57:55.060960    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:58:00 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:58:00.062051    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:58:05 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:58:05.062857    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:58:10 no-preload-20220531175323-6903 kubelet[1738]: E0531 17:58:10.063786    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
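
Note: kubelet logs the identical "cni plugin not initialized" error every five seconds throughout the visible window, so the runtime's network never became ready during the entire wait. The runtime's own view of CNI readiness is visible in its introspection output (illustrative; crictl info prints the CRI runtime status, including the network condition):

	out/minikube-linux-amd64 ssh -p no-preload-20220531175323-6903 -- sudo crictl info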
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-8cptk storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-8cptk storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-8cptk storage-provisioner: exit status 1 (50.84465ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-8cptk" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-8cptk storage-provisioner: exit status 1
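Note: the NotFound errors are a race in the post-mortem helper: the non-running pods were listed one step earlier but were gone (most likely removed during cleanup) by the time describe ran. Listing and detailing in a single call avoids the second round-trip (illustrative):

	kubectl --context no-preload-20220531175323-6903 get pods -A --field-selector=status.phase!=Running -o wide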
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (291.64s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (284.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220531175509-6903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0531 17:55:15.863172    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 17:55:16.126499    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220531175509-6903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (4m42.41456763s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220531175509-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node default-k8s-different-port-20220531175509-6903 in cluster default-k8s-different-port-20220531175509-6903
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
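
Note: this run fails the same way as no-preload above (exit 80 on the node-ready wait) even though stdout reports a clean bring-up; --apiserver-port=8444 only moves the API server off the default 8443, which should be visible in the reported endpoint (illustrative check):

	kubectl --context default-k8s-different-port-20220531175509-6903 cluster-info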
** stderr ** 
	I0531 17:55:09.224912  237733 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:55:09.225066  237733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:55:09.225075  237733 out.go:309] Setting ErrFile to fd 2...
	I0531 17:55:09.225080  237733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:55:09.225202  237733 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:55:09.225479  237733 out.go:303] Setting JSON to false
	I0531 17:55:09.226972  237733 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5860,"bootTime":1654013849,"procs":501,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:55:09.227047  237733 start.go:125] virtualization: kvm guest
	I0531 17:55:09.229948  237733 out.go:177] * [default-k8s-different-port-20220531175509-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:55:09.231483  237733 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:55:09.231459  237733 notify.go:193] Checking for updates...
	I0531 17:55:09.232852  237733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:55:09.234293  237733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:55:09.235654  237733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:55:09.237103  237733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:55:09.238611  237733 config.go:178] Loaded profile config "bridge-20220531174029-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:55:09.238718  237733 config.go:178] Loaded profile config "enable-default-cni-20220531174029-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:55:09.238832  237733 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:55:09.238878  237733 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:55:09.276689  237733 docker.go:137] docker version: linux-20.10.16
	I0531 17:55:09.276762  237733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:55:09.373480  237733 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2022-05-31 17:55:09.303963677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:55:09.373977  237733 docker.go:254] overlay module found
	I0531 17:55:09.376211  237733 out.go:177] * Using the docker driver based on user configuration
	I0531 17:55:09.377420  237733 start.go:284] selected driver: docker
	I0531 17:55:09.377431  237733 start.go:806] validating driver "docker" against <nil>
	I0531 17:55:09.377448  237733 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:55:09.378274  237733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:55:09.476318  237733 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2022-05-31 17:55:09.406456056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:55:09.476432  237733 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 17:55:09.476582  237733 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 17:55:09.478513  237733 out.go:177] * Using Docker driver with the root privilege
	I0531 17:55:09.479596  237733 cni.go:95] Creating CNI manager for ""
	I0531 17:55:09.479614  237733 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:55:09.479629  237733 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:55:09.479638  237733 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:55:09.479644  237733 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 17:55:09.479654  237733 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:55:09.481242  237733 out.go:177] * Starting control plane node default-k8s-different-port-20220531175509-6903 in cluster default-k8s-different-port-20220531175509-6903
	I0531 17:55:09.482596  237733 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 17:55:09.483912  237733 out.go:177] * Pulling base image ...
	I0531 17:55:09.485152  237733 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:55:09.485185  237733 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 17:55:09.485195  237733 cache.go:57] Caching tarball of preloaded images
	I0531 17:55:09.485240  237733 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:55:09.485368  237733 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 17:55:09.485387  237733 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 17:55:09.485490  237733 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json ...
	I0531 17:55:09.485516  237733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json: {Name:mkd12ff0da87da8ca7dc977c96ce5f82ac0cd28e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:09.531456  237733 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 17:55:09.531480  237733 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 17:55:09.531489  237733 cache.go:206] Successfully downloaded all kic artifacts
	I0531 17:55:09.531525  237733 start.go:352] acquiring machines lock for default-k8s-different-port-20220531175509-6903: {Name:mk53f02aa9701786e51ee0c8a5d73dcf46801d8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:55:09.531642  237733 start.go:356] acquired machines lock for "default-k8s-different-port-20220531175509-6903" in 95.578µs
	I0531 17:55:09.531672  237733 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:55:09.531767  237733 start.go:131] createHost starting for "" (driver="docker")
	I0531 17:55:09.534043  237733 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 17:55:09.534255  237733 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220531175509-6903" (driver="docker")
	I0531 17:55:09.534292  237733 client.go:168] LocalClient.Create starting
	I0531 17:55:09.534355  237733 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:55:09.534395  237733 main.go:134] libmachine: Decoding PEM data...
	I0531 17:55:09.534415  237733 main.go:134] libmachine: Parsing certificate...
	I0531 17:55:09.534480  237733 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:55:09.534505  237733 main.go:134] libmachine: Decoding PEM data...
	I0531 17:55:09.534540  237733 main.go:134] libmachine: Parsing certificate...
	I0531 17:55:09.534863  237733 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220531175509-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:55:09.565105  237733 cli_runner.go:211] docker network inspect default-k8s-different-port-20220531175509-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:55:09.565177  237733 network_create.go:272] running [docker network inspect default-k8s-different-port-20220531175509-6903] to gather additional debugging logs...
	I0531 17:55:09.565201  237733 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220531175509-6903
	W0531 17:55:09.594965  237733 cli_runner.go:211] docker network inspect default-k8s-different-port-20220531175509-6903 returned with exit code 1
	I0531 17:55:09.594989  237733 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220531175509-6903]: docker network inspect default-k8s-different-port-20220531175509-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220531175509-6903
	I0531 17:55:09.595002  237733 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220531175509-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220531175509-6903
	
	** /stderr **
	I0531 17:55:09.595037  237733 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:55:09.625646  237733 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-09a226de47ed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e6:98:b2:7a}}
	I0531 17:55:09.626210  237733 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-ffba0413ceee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:c8:ab:51:43}}
	I0531 17:55:09.626803  237733 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-b2391a84ebd8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:fa:6a:5d:90}}
	I0531 17:55:09.627438  237733 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc00070e1d0] misses:0}
	I0531 17:55:09.627470  237733 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:55:09.627482  237733 network_create.go:115] attempt to create docker network default-k8s-different-port-20220531175509-6903 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0531 17:55:09.627527  237733 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220531175509-6903
	I0531 17:55:09.691413  237733 network_create.go:99] docker network default-k8s-different-port-20220531175509-6903 192.168.76.0/24 created
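
The three cli_runner calls above are minikube's whole network bootstrap: probe the profile-named network, scan the existing bridges for taken /24s, then create the network with an explicit subnet and gateway. A minimal sketch for confirming what was created, assuming the profile from this run is still present on the host:

	# Inspect the bridge network created above (name and expected values taken from this log)
	docker network inspect default-k8s-different-port-20220531175509-6903 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	# For this run the output should be: 192.168.76.0/24 via 192.168.76.1
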
	I0531 17:55:09.691441  237733 kic.go:106] calculated static IP "192.168.76.2" for the "default-k8s-different-port-20220531175509-6903" container
	I0531 17:55:09.691494  237733 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:55:09.723052  237733 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220531175509-6903 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220531175509-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:55:09.753883  237733 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220531175509-6903
	I0531 17:55:09.753963  237733 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220531175509-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220531175509-6903 --entrypoint /usr/bin/test -v default-k8s-different-port-20220531175509-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:55:10.283397  237733 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220531175509-6903
	I0531 17:55:10.283448  237733 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:55:10.283469  237733 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 17:55:10.283532  237733 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220531175509-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 17:55:17.676777  237733 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220531175509-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (7.393157624s)
	I0531 17:55:17.676805  237733 kic.go:188] duration metric: took 7.393333 seconds to extract preloaded images to volume
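
The extraction above is plain tar: the preload .tar.lz4 is bind-mounted read-only into a throwaway kicbase container and unpacked into the named volume that later becomes the node's /var. A sketch for peeking at what landed in the volume; the alpine image here is an arbitrary viewer, not something minikube itself uses:

	# List what the preload extraction populated (volume name from this log; alpine is an assumption)
	docker run --rm -v default-k8s-different-port-20220531175509-6903:/var alpine ls -la /var/lib
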
	W0531 17:55:17.676927  237733 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:55:17.677035  237733 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:55:17.775970  237733 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220531175509-6903 --name default-k8s-different-port-20220531175509-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220531175509-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220531175509-6903 --network default-k8s-different-port-20220531175509-6903 --ip 192.168.76.2 --volume default-k8s-different-port-20220531175509-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
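
Each --publish=127.0.0.1:: in the docker run above binds the container port to an ephemeral loopback port, so the host side differs on every run. `docker port` recovers the mapping; for this run it should agree with the SSH port the log reports a few lines later (49412):

	# Which 127.0.0.1 port reaches the node's sshd on container port 22?
	docker port default-k8s-different-port-20220531175509-6903 22
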
	I0531 17:55:18.166996  237733 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Running}}
	I0531 17:55:18.202090  237733 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 17:55:18.235616  237733 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220531175509-6903 stat /var/lib/dpkg/alternatives/iptables
	I0531 17:55:18.294679  237733 oci.go:247] the created container "default-k8s-different-port-20220531175509-6903" has a running status.
	I0531 17:55:18.294726  237733 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa...
	I0531 17:55:18.380741  237733 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 17:55:18.470337  237733 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 17:55:18.505653  237733 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 17:55:18.505680  237733 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220531175509-6903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 17:55:18.595341  237733 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 17:55:18.628971  237733 machine.go:88] provisioning docker machine ...
	I0531 17:55:18.629007  237733 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220531175509-6903"
	I0531 17:55:18.629060  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:18.663710  237733 main.go:134] libmachine: Using SSH client type: native
	I0531 17:55:18.663874  237733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0531 17:55:18.663898  237733 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220531175509-6903 && echo "default-k8s-different-port-20220531175509-6903" | sudo tee /etc/hostname
	I0531 17:55:18.784344  237733 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220531175509-6903
	
	I0531 17:55:18.784420  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:18.815819  237733 main.go:134] libmachine: Using SSH client type: native
	I0531 17:55:18.815998  237733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0531 17:55:18.816031  237733 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220531175509-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220531175509-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220531175509-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:55:18.922531  237733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
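
The empty SSH output above is expected: the script only echoes when it has to append a fresh 127.0.1.1 entry. A quick check that the hostname mapping is in place, assuming `minikube ssh` against this profile still works:

	# Confirm the 127.0.1.1 hostname entry inside the node (profile name from this log)
	minikube -p default-k8s-different-port-20220531175509-6903 ssh "grep default-k8s-different-port /etc/hosts"
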
	I0531 17:55:18.922559  237733 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 17:55:18.922593  237733 ubuntu.go:177] setting up certificates
	I0531 17:55:18.922608  237733 provision.go:83] configureAuth start
	I0531 17:55:18.922659  237733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 17:55:18.953544  237733 provision.go:138] copyHostCerts
	I0531 17:55:18.953592  237733 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 17:55:18.953614  237733 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 17:55:18.953681  237733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 17:55:18.953755  237733 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 17:55:18.953766  237733 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 17:55:18.953788  237733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 17:55:18.953830  237733 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 17:55:18.953838  237733 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 17:55:18.953856  237733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 17:55:18.953896  237733 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220531175509-6903 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220531175509-6903]
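
The server cert generated above should carry every name in the san=[...] list. A sketch for confirming the SANs on the resulting server.pem with openssl, run on the host (path from the log line above):

	# Show the Subject Alternative Names baked into the machine server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
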
	I0531 17:55:19.297761  237733 provision.go:172] copyRemoteCerts
	I0531 17:55:19.297818  237733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:55:19.297846  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:19.329064  237733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 17:55:19.410298  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 17:55:19.428136  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0531 17:55:19.444277  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 17:55:19.460613  237733 provision.go:86] duration metric: configureAuth took 537.995407ms
	I0531 17:55:19.460648  237733 ubuntu.go:193] setting minikube options for container-runtime
	I0531 17:55:19.460815  237733 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:55:19.460830  237733 machine.go:91] provisioned docker machine in 831.838321ms
	I0531 17:55:19.460836  237733 client.go:171] LocalClient.Create took 9.926533983s
	I0531 17:55:19.460850  237733 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220531175509-6903" took 9.926596154s
	I0531 17:55:19.460860  237733 start.go:306] post-start starting for "default-k8s-different-port-20220531175509-6903" (driver="docker")
	I0531 17:55:19.460865  237733 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:55:19.460901  237733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:55:19.460931  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:19.492560  237733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 17:55:19.574154  237733 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:55:19.576677  237733 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 17:55:19.576702  237733 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 17:55:19.576713  237733 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 17:55:19.576718  237733 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 17:55:19.576727  237733 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 17:55:19.576771  237733 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 17:55:19.576868  237733 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 17:55:19.576960  237733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 17:55:19.583206  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:55:19.599695  237733 start.go:309] post-start completed in 138.827014ms
	I0531 17:55:19.600008  237733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 17:55:19.631526  237733 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json ...
	I0531 17:55:19.631765  237733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:55:19.631819  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:19.662234  237733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 17:55:19.743199  237733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:55:19.746927  237733 start.go:134] duration metric: createHost completed in 10.215147308s
	I0531 17:55:19.746952  237733 start.go:81] releasing machines lock for "default-k8s-different-port-20220531175509-6903", held for 10.215296024s
	I0531 17:55:19.747030  237733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 17:55:19.777506  237733 ssh_runner.go:195] Run: systemctl --version
	I0531 17:55:19.777562  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:19.777580  237733 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 17:55:19.777628  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:19.812504  237733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 17:55:19.813297  237733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 17:55:19.890967  237733 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 17:55:19.911913  237733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 17:55:19.920421  237733 docker.go:187] disabling docker service ...
	I0531 17:55:19.920468  237733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 17:55:19.935722  237733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 17:55:19.944243  237733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 17:55:20.020202  237733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 17:55:20.096840  237733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 17:55:20.105779  237733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 17:55:20.118474  237733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
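
The opaque payload above is just containerd's config.toml shipped as base64 (the first decoded lines are `version = 2` and `root = "/var/lib/containerd"`). Rather than decoding the blob by hand, the installed file can be read back from the node, assuming the profile is still up:

	# Show the head of the containerd config that the command above installed
	minikube -p default-k8s-different-port-20220531175509-6903 ssh "sudo sed -n '1,12p' /etc/containerd/config.toml"
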
	I0531 17:55:20.130700  237733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 17:55:20.136636  237733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 17:55:20.142765  237733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:55:20.213539  237733 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 17:55:20.273839  237733 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 17:55:20.273900  237733 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 17:55:20.277709  237733 start.go:468] Will wait 60s for crictl version
	I0531 17:55:20.277761  237733 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:55:20.303492  237733 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 17:55:20.303535  237733 ssh_runner.go:195] Run: containerd --version
	I0531 17:55:20.330049  237733 ssh_runner.go:195] Run: containerd --version
	I0531 17:55:20.359487  237733 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 17:55:20.360939  237733 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220531175509-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:55:20.394005  237733 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0531 17:55:20.397245  237733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:55:20.408277  237733 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 17:55:20.409485  237733 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:55:20.409537  237733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:55:20.432265  237733 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:55:20.432284  237733 containerd.go:521] Images already preloaded, skipping extraction
	I0531 17:55:20.432323  237733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:55:20.454601  237733 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:55:20.454628  237733 cache_images.go:84] Images are preloaded, skipping loading
	I0531 17:55:20.454694  237733 ssh_runner.go:195] Run: sudo crictl info
	I0531 17:55:20.476514  237733 cni.go:95] Creating CNI manager for ""
	I0531 17:55:20.476535  237733 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:55:20.476549  237733 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 17:55:20.476567  237733 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220531175509-6903 NodeName:default-k8s-different-port-20220531175509-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 17:55:20.476719  237733 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220531175509-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 17:55:20.476816  237733 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220531175509-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
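
The unit fragment above becomes a systemd drop-in (written to 10-kubeadm.conf a few lines below), which is why the first empty `ExecStart=` matters: it resets the packaged command list before substituting minikube's kubelet invocation. `systemctl cat` shows the merged result on the node; a sketch assuming the profile still exists:

	# View the kubelet unit plus minikube's drop-in as systemd sees them
	minikube -p default-k8s-different-port-20220531175509-6903 ssh "systemctl cat kubelet"
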
	I0531 17:55:20.476870  237733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 17:55:20.483752  237733 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 17:55:20.483810  237733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 17:55:20.490008  237733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0531 17:55:20.501929  237733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 17:55:20.513763  237733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
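
All three scp'd artifacts appear earlier in this log: the kubelet drop-in and unit just above, and the kubeadm config printed at kubeadm.go:162 (2075 bytes covering its four YAML documents). The staged copy can be read back directly, assuming the profile is still up:

	# Read back the kubeadm config that was just staged on the node
	minikube -p default-k8s-different-port-20220531175509-6903 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
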
	I0531 17:55:20.525373  237733 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0531 17:55:20.528000  237733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:55:20.536318  237733 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903 for IP: 192.168.76.2
	I0531 17:55:20.536421  237733 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 17:55:20.536472  237733 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 17:55:20.536535  237733 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.key
	I0531 17:55:20.536553  237733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.crt with IP's: []
	I0531 17:55:20.875005  237733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.crt ...
	I0531 17:55:20.875032  237733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.crt: {Name:mkf4ba2a4cd2475ac03a6416a068048633f7d698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:20.875218  237733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.key ...
	I0531 17:55:20.875231  237733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.key: {Name:mkdcdcb90c91d5f2247e4f89e0c48374f61998da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:20.875316  237733 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key.31bdca25
	I0531 17:55:20.875332  237733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 17:55:21.015222  237733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt.31bdca25 ...
	I0531 17:55:21.015253  237733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt.31bdca25: {Name:mkc7351a9f6cfc2bb498e4065df01365462d4cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:21.015427  237733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key.31bdca25 ...
	I0531 17:55:21.015443  237733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key.31bdca25: {Name:mk1ef1b203af7653a1581d713934777b982510fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:21.015524  237733 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt
	I0531 17:55:21.015577  237733 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key
	I0531 17:55:21.015620  237733 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key
	I0531 17:55:21.015639  237733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.crt with IP's: []
	I0531 17:55:21.330805  237733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.crt ...
	I0531 17:55:21.330834  237733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.crt: {Name:mkc9222cca3ea6f8b41ba701a9d1034097d0d362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:21.331009  237733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key ...
	I0531 17:55:21.331023  237733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key: {Name:mk19065af0e06c6a3e55b80b6e85f6685c16730d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:21.331212  237733 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 17:55:21.331247  237733 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 17:55:21.331259  237733 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 17:55:21.331283  237733 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 17:55:21.331305  237733 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 17:55:21.331332  237733 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 17:55:21.331376  237733 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:55:21.331917  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 17:55:21.349880  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 17:55:21.366521  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 17:55:21.383279  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 17:55:21.399302  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 17:55:21.415736  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 17:55:21.431825  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 17:55:21.448273  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 17:55:21.463980  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 17:55:21.480131  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 17:55:21.496054  237733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 17:55:21.512179  237733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 17:55:21.523784  237733 ssh_runner.go:195] Run: openssl version
	I0531 17:55:21.528079  237733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 17:55:21.534503  237733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 17:55:21.537304  237733 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 17:55:21.537335  237733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 17:55:21.541746  237733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 17:55:21.548247  237733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 17:55:21.554754  237733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:55:21.557645  237733 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:55:21.557692  237733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:55:21.562409  237733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 17:55:21.568997  237733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 17:55:21.575641  237733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 17:55:21.578412  237733 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 17:55:21.578446  237733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 17:55:21.582859  237733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 17:55:21.589535  237733 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:55:21.589610  237733 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 17:55:21.589652  237733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 17:55:21.613045  237733 cri.go:87] found id: ""
	I0531 17:55:21.613103  237733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 17:55:21.619652  237733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 17:55:21.626310  237733 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 17:55:21.626349  237733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 17:55:21.632581  237733 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 17:55:21.632611  237733 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 17:55:21.877528  237733 out.go:204]   - Generating certificates and keys ...
	I0531 17:55:24.642104  237733 out.go:204]   - Booting up control plane ...
	I0531 17:55:36.683829  237733 out.go:204]   - Configuring RBAC rules ...
	I0531 17:55:37.096151  237733 cni.go:95] Creating CNI manager for ""
	I0531 17:55:37.096177  237733 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:55:37.097964  237733 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 17:55:37.099325  237733 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 17:55:37.102744  237733 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 17:55:37.102763  237733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 17:55:37.115902  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 17:55:37.857490  237733 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 17:55:37.857535  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:37.857603  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903 minikube.k8s.io/updated_at=2022_05_31T17_55_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:37.912874  237733 ops.go:34] apiserver oom_adj: -16
	I0531 17:55:37.912892  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:38.466571  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:38.966088  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:39.466656  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:39.966094  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:40.466024  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:40.966186  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:41.466936  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:41.966911  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:42.466948  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:42.966437  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:43.466013  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:43.966333  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:44.466295  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:44.966842  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:45.466099  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:45.966571  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:46.465966  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:46.966278  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:47.465993  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:47.966364  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:48.466091  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:48.966931  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:49.467031  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:49.966036  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:50.466689  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:50.966369  237733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:55:51.022158  237733 kubeadm.go:1045] duration metric: took 13.164665375s to wait for elevateKubeSystemPrivileges.
	I0531 17:55:51.022187  237733 kubeadm.go:397] StartCluster complete in 29.432659701s
	I0531 17:55:51.022203  237733 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:51.022324  237733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:55:51.024395  237733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:51.539890  237733 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531175509-6903" rescaled to 1
	I0531 17:55:51.539939  237733 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:55:51.541819  237733 out.go:177] * Verifying Kubernetes components...
	I0531 17:55:51.539997  237733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 17:55:51.540016  237733 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 17:55:51.540168  237733 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:55:51.543216  237733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:55:51.543249  237733 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 17:55:51.543267  237733 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531175509-6903"
	I0531 17:55:51.543249  237733 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 17:55:51.543348  237733 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531175509-6903"
	W0531 17:55:51.543361  237733 addons.go:165] addon storage-provisioner should already be in state true
	I0531 17:55:51.543397  237733 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 17:55:51.543648  237733 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 17:55:51.543901  237733 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 17:55:51.560950  237733 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 17:55:51.589961  237733 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:55:51.591623  237733 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:55:51.591647  237733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 17:55:51.591695  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:51.594231  237733 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531175509-6903"
	W0531 17:55:51.594250  237733 addons.go:165] addon default-storageclass should already be in state true
	I0531 17:55:51.594273  237733 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 17:55:51.594607  237733 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 17:55:51.615404  237733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 17:55:51.631540  237733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 17:55:51.635323  237733 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 17:55:51.635344  237733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 17:55:51.635389  237733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 17:55:51.676334  237733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 17:55:51.816408  237733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 17:55:51.819916  237733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:55:52.030460  237733 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0531 17:55:52.409472  237733 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0531 17:55:52.410801  237733 addons.go:417] enableAddons completed in 870.78747ms
	I0531 17:55:53.568188  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:55:56.067399  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:55:58.067698  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:00.068079  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:02.068792  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:04.568475  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:06.568763  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:09.068463  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:11.567556  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:14.067647  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:16.149146  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:18.567711  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:21.067679  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:23.568416  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:26.067591  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:28.067620  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:30.067718  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:32.068461  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:34.568127  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:37.067720  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:39.067893  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:41.568146  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:43.568231  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:46.067788  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:48.068015  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:50.068503  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:52.568352  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:55.068307  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:57.568336  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:00.068172  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:02.568206  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:05.067673  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:07.067801  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:09.567953  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:12.067156  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:14.068074  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:16.567424  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:18.567581  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:20.567641  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:22.567732  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:24.568096  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:26.568336  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:28.568424  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:31.068035  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:33.568490  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:36.067248  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:38.068153  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:40.567678  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:42.567872  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:44.568143  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:47.068158  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:49.568530  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:52.067793  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:54.068016  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:56.567511  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:58.567932  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:00.568483  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:03.068327  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:05.566754  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:07.567450  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:09.568294  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:12.067392  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:14.068088  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:16.567914  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:19.067263  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:21.067407  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:23.067542  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:25.068210  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:27.568275  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:30.067941  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:32.068112  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:34.567004  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:36.567570  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:38.568081  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:40.568431  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:42.569999  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:45.067990  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:47.568114  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:49.568309  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:52.067555  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:54.567411  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:57.067999  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:59.068182  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:01.568144  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:04.067808  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:06.567749  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:08.568349  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:11.067636  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:13.067889  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:15.566455  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:17.567601  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:20.067425  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:22.567842  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:24.567885  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:26.568273  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:29.067838  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:31.190088  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:33.568696  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:36.068281  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:38.567893  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:41.067896  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:43.567459  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:46.067986  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:48.568083  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:50.568175  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:51.570285  237733 node_ready.go:38] duration metric: took 4m0.009297795s waiting for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 17:59:51.572685  237733 out.go:177] 
	W0531 17:59:51.574080  237733 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 17:59:51.574098  237733 out.go:239] * 
	W0531 17:59:51.574853  237733 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 17:59:51.576647  237733 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220531175509-6903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
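
Note (analysis, not part of the test output): the GUEST_START error above is minikube's generic start timeout. The stderr log shows kubeadm init and the addons completing, after which the node stayed "Ready":"False" until the node wait gave up after 4m0s (node_ready.go:38). With the docker driver and containerd runtime the log selected kindnet as the CNI (cni.go:162 at 17:55:37), and this profile also points kubelet at a non-default CNI conf dir (ExtraOptions cni-conf-dir=/etc/cni/net.mk), so a node stuck NotReady here typically means the CNI never initialized. A minimal local-triage sketch, assuming the profile is still running:

	# Why is the node NotReady? The Ready condition carries a reason/message.
	out/minikube-linux-amd64 -p default-k8s-different-port-20220531175509-6903 kubectl -- describe node default-k8s-different-port-20220531175509-6903
	# Did the kindnet/CNI pods ever start?
	out/minikube-linux-amd64 -p default-k8s-different-port-20220531175509-6903 kubectl -- get pods -n kube-system -o wide
	# Was a CNI config written where this profile's kubelet looks for it?
	out/minikube-linux-amd64 -p default-k8s-different-port-20220531175509-6903 ssh -- ls /etc/cni/net.mk

These wrap kubectl and ssh through the minikube binary under test; any kubectl pointed at the kubeconfig written at 17:55:51 would work equally well.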
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220531175509-6903
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220531175509-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a",
	        "Created": "2022-05-31T17:55:17.80847266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238395,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:55:18.158165808Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hosts",
	        "LogPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a-json.log",
	        "Name": "/default-k8s-different-port-20220531175509-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220531175509-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220531175509-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220531175509-6903",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220531175509-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220531175509-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8eaff00a202d06cce1c8d58235602194947fd26c7a48f709899b5f65739bc85",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8eaff00a202",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220531175509-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b24400321365",
	                        "default-k8s-different-port-20220531175509-6903"
	                    ],
	                    "NetworkID": "6fc1f79f54eab1e8df36883c8283b483c18aa0e383b30bdb7aa37eb035c0586e",
	                    "EndpointID": "2b828753c599e8680fae2d033551c2f135b67a4addb875098c2181def1415f01",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
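
Note (analysis, not part of the test output): the NetworkSettings.Ports map in the inspect dump above is how minikube resolves the host-side SSH endpoint; the cli_runner lines at 17:55:51 in the stderr log use exactly this Go template, and the resulting port 49412 matches the sshutil.go:53 client there. For manual use:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-different-port-20220531175509-6903
	# prints 49412 for this run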
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220531175509-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                  Profile                  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cilium-20220531174030-6903                     | cilium-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:43 UTC | 31 May 22 17:45 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p cilium-20220531174030-6903                     | cilium-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:45 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| delete  | -p cilium-20220531174030-6903                     | cilium-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:45 UTC |
	| start   | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |                |                     |                     |
	|         | --disable-driver-mounts                           |                                           |         |                |                     |                     |
	|         | --keep-context=false                              |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                           |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                           |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |                |                     |                     |
	| start   | -p                                                | enable-default-cni-20220531174029-6903    | jenkins | v1.26.0-beta.1 | 31 May 22 17:44 UTC | 31 May 22 17:49 UTC |
	|         | enable-default-cni-20220531174029-6903            |                                           |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220531174029-6903    | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | enable-default-cni-20220531174029-6903            |                                           |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| start   | -p bridge-20220531174029-6903                     | bridge-20220531174029-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:49 UTC |
	|         | --memory=2048                                     |                                           |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                           |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                           |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	| ssh     | -p bridge-20220531174029-6903                     | bridge-20220531174029-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | pgrep -a kubelet                                  |                                           |         |                |                     |                     |
	| logs    | calico-20220531174030-6903                        | calico-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| delete  | -p calico-20220531174030-6903                     | calico-20220531174030-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	| delete  | -p                                                | disable-driver-mounts-20220531175323-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | disable-driver-mounts-20220531175323-6903         |                                           |         |                |                     |                     |
	| start   | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:54 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |                |                     |                     |
	|         | --disable-driver-mounts                           |                                           |         |                |                     |                     |
	|         | --keep-context=false                              |                                           |         |                |                     |                     |
	|         | --driver=docker                                   |                                           |         |                |                     |                     |
	|         | --container-runtime=containerd                    |                                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                           |         |                |                     |                     |
	| ssh     | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                           |         |                |                     |                     |
	| pause   | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| unpause | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                           |         |                |                     |                     |
	| delete  | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	| delete  | -p                                                | old-k8s-version-20220531174534-6903       | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903               |                                           |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903            | enable-default-cni-20220531174029-6903    | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                        | bridge-20220531174029-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220531174029-6903    | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903            |                                           |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                     | bridge-20220531174029-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                        |                                           |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 17:56:04
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:56:04.761070  243743 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:56:04.761198  243743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:04.761209  243743 out.go:309] Setting ErrFile to fd 2...
	I0531 17:56:04.761213  243743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:04.761320  243743 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:56:04.761604  243743 out.go:303] Setting JSON to false
	I0531 17:56:04.763369  243743 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5916,"bootTime":1654013849,"procs":806,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:56:04.763439  243743 start.go:125] virtualization: kvm guest
	I0531 17:56:04.765860  243743 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:56:04.767852  243743 notify.go:193] Checking for updates...
	I0531 17:56:04.767855  243743 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:56:04.769545  243743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:56:04.771229  243743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:56:04.772729  243743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:56:04.774183  243743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:56:04.776078  243743 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776263  243743 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776404  243743 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776470  243743 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:56:04.818427  243743 docker.go:137] docker version: linux-20.10.16
	I0531 17:56:04.818525  243743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:56:04.933426  243743 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:65 SystemTime:2022-05-31 17:56:04.851840173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:56:04.933610  243743 docker.go:254] overlay module found
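Throughout this start-up, minikube shells out to `docker system info --format "{{json .}}"` (the Run: line above) and decodes the JSON blob to size up the daemon before picking a driver. As an illustrative sketch only, with a trimmed struct holding a few of the fields reported above (not minikube's actual info.go types):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps only a handful of the fields the log above reports.
type dockerInfo struct {
	Driver            string `json:"Driver"`
	ServerVersion     string `json:"ServerVersion"`
	ContainersRunning int    `json:"ContainersRunning"`
	OperatingSystem   string `json:"OperatingSystem"`
}

func main() {
	out, err := exec.Command("docker", "system", "info",
		"--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s on %s, driver %s, %d running\n",
		info.ServerVersion, info.OperatingSystem, info.Driver, info.ContainersRunning)
}
```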
	I0531 17:56:04.936012  243743 out.go:177] * Using the docker driver based on user configuration
	I0531 17:56:04.937461  243743 start.go:284] selected driver: docker
	I0531 17:56:04.937479  243743 start.go:806] validating driver "docker" against <nil>
	I0531 17:56:04.937498  243743 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:56:04.938476  243743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:56:05.050928  243743 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:65 SystemTime:2022-05-31 17:56:04.970943421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:56:05.051044  243743 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 17:56:05.051282  243743 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 17:56:05.053473  243743 out.go:177] * Using Docker driver with the root privilege
	I0531 17:56:05.054914  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:05.054932  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:05.054948  243743 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:56:05.054953  243743 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:56:05.054960  243743 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 17:56:05.054974  243743 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:56:05.056598  243743 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 17:56:05.058015  243743 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 17:56:05.059392  243743 out.go:177] * Pulling base image ...
	I0531 17:56:05.060693  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:05.060727  243743 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:56:05.060733  243743 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 17:56:05.060745  243743 cache.go:57] Caching tarball of preloaded images
	I0531 17:56:05.060946  243743 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 17:56:05.060966  243743 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 17:56:05.061099  243743 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 17:56:05.061132  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json: {Name:mk012ae752926ff69a2c9dc59c259dc1c0bd12d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:05.116443  243743 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 17:56:05.116476  243743 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 17:56:05.116493  243743 cache.go:206] Successfully downloaded all kic artifacts
	I0531 17:56:05.116542  243743 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:05.116688  243743 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 125.335µs
	I0531 17:56:05.116714  243743 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:56:05.116812  243743 start.go:131] createHost starting for "" (driver="docker")
	I0531 17:56:03.023989  242818 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 17:56:03.024207  242818 start.go:165] libmachine.API.Create for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 17:56:03.024237  242818 client.go:168] LocalClient.Create starting
	I0531 17:56:03.024313  242818 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:56:03.024346  242818 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:03.024362  242818 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:03.024421  242818 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:56:03.024440  242818 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:03.024449  242818 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:03.024728  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:56:03.055357  242818 cli_runner.go:211] docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:56:03.055434  242818 network_create.go:272] running [docker network inspect newest-cni-20220531175602-6903] to gather additional debugging logs...
	I0531 17:56:03.055459  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903
	W0531 17:56:03.085005  242818 cli_runner.go:211] docker network inspect newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:03.085045  242818 network_create.go:275] error running [docker network inspect newest-cni-20220531175602-6903]: docker network inspect newest-cni-20220531175602-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220531175602-6903
	I0531 17:56:03.085059  242818 network_create.go:277] output of [docker network inspect newest-cni-20220531175602-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220531175602-6903
	
	** /stderr **
	I0531 17:56:03.085110  242818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:56:03.115283  242818 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-09a226de47ed IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e6:98:b2:7a}}
	I0531 17:56:03.115975  242818 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000010158] misses:0}
	I0531 17:56:03.116016  242818 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:56:03.116030  242818 network_create.go:115] attempt to create docker network newest-cni-20220531175602-6903 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 17:56:03.116075  242818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220531175602-6903
	I0531 17:56:03.183655  242818 network_create.go:99] docker network newest-cni-20220531175602-6903 192.168.58.0/24 created
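	The network.go lines above show minikube's free-subnet probe: 192.168.49.0/24 is skipped because an existing bridge already owns it, then 192.168.58.0/24 is reserved and created. A rough re-implementation sketch of that probe, for illustration only (the step of 9 between candidates is inferred from the 49 -> 58 hop in this log, and the interface scan below stands in for minikube's actual bookkeeping):

```go
package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface already holds an address
// inside the candidate subnet (a stand-in for minikube's check).
func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Walk candidate private /24s: the log shows 192.168.49.0/24 taken,
	// then 192.168.58.0/24 chosen, i.e. the third octet advancing by 9.
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			continue
		}
		if taken(subnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
	fmt.Println("no free subnet found")
}
```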
	I0531 17:56:03.183692  242818 kic.go:106] calculated static IP "192.168.58.2" for the "newest-cni-20220531175602-6903" container
	I0531 17:56:03.183783  242818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:56:03.223995  242818 cli_runner.go:164] Run: docker volume create newest-cni-20220531175602-6903 --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:56:03.258196  242818 oci.go:103] Successfully created a docker volume newest-cni-20220531175602-6903
	I0531 17:56:03.258284  242818 cli_runner.go:164] Run: docker run --rm --name newest-cni-20220531175602-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --entrypoint /usr/bin/test -v newest-cni-20220531175602-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:56:04.358045  242818 cli_runner.go:217] Completed: docker run --rm --name newest-cni-20220531175602-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --entrypoint /usr/bin/test -v newest-cni-20220531175602-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (1.099719827s)
	I0531 17:56:04.358080  242818 oci.go:107] Successfully prepared a docker volume newest-cni-20220531175602-6903
	I0531 17:56:04.358118  242818 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:04.358142  242818 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 17:56:04.358190  242818 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220531175602-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
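	The `docker run --rm --entrypoint /usr/bin/tar` invocation above is the preload trick: a throwaway container mounts the lz4 tarball read-only, mounts the freshly created volume at /extractDir, and untars into it, so the node container later starts with its images already in /var. A standalone sketch of the same invocation (the tarball path and volume name are placeholders, and the image digest pin from the log is omitted for brevity):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Placeholder paths: point these at a real preload tarball and volume.
	const tarball = "/path/to/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4"
	const volume = "my-minikube-volume"

	// Throwaway container: tar is the entrypoint, the tarball is mounted
	// read-only, and the docker volume receives the extracted contents.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preload extracted into volume", volume)
}
```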
	I0531 17:56:05.728379  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:07.996964  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:04.568475  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:06.568763  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:09.068463  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:05.119757  243743 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 17:56:05.119962  243743 start.go:165] libmachine.API.Create for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 17:56:05.119989  243743 client.go:168] LocalClient.Create starting
	I0531 17:56:05.120055  243743 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:56:05.120089  243743 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:05.120110  243743 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:05.120167  243743 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:56:05.120184  243743 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:05.120193  243743 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:05.120480  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:56:05.151452  243743 cli_runner.go:211] docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:56:05.151507  243743 network_create.go:272] running [docker network inspect embed-certs-20220531175604-6903] to gather additional debugging logs...
	I0531 17:56:05.151528  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903
	W0531 17:56:05.183019  243743 cli_runner.go:211] docker network inspect embed-certs-20220531175604-6903 returned with exit code 1
	I0531 17:56:05.183050  243743 network_create.go:275] error running [docker network inspect embed-certs-20220531175604-6903]: docker network inspect embed-certs-20220531175604-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220531175604-6903
	I0531 17:56:05.183073  243743 network_create.go:277] output of [docker network inspect embed-certs-20220531175604-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220531175604-6903
	
	** /stderr **
	I0531 17:56:05.183114  243743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:56:05.218722  243743 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060e2d8] misses:0}
	I0531 17:56:05.218771  243743 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:56:05.218788  243743 network_create.go:115] attempt to create docker network embed-certs-20220531175604-6903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 17:56:05.218827  243743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220531175604-6903
	I0531 17:56:05.286551  243743 network_create.go:99] docker network embed-certs-20220531175604-6903 192.168.49.0/24 created
	I0531 17:56:05.286588  243743 kic.go:106] calculated static IP "192.168.49.2" for the "embed-certs-20220531175604-6903" container
	I0531 17:56:05.286654  243743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:56:05.321293  243743 cli_runner.go:164] Run: docker volume create embed-certs-20220531175604-6903 --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:56:05.353388  243743 oci.go:103] Successfully created a docker volume embed-certs-20220531175604-6903
	I0531 17:56:05.353454  243743 cli_runner.go:164] Run: docker run --rm --name embed-certs-20220531175604-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --entrypoint /usr/bin/test -v embed-certs-20220531175604-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:56:10.010528  242818 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220531175602-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (5.652255027s)
	I0531 17:56:10.010581  242818 kic.go:188] duration metric: took 5.652433 seconds to extract preloaded images to volume
	W0531 17:56:10.010767  242818 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:56:10.010898  242818 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:56:10.133622  242818 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	W0531 17:56:10.199092  242818 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 returned with exit code 125
	I0531 17:56:10.199198  242818 client.go:171] LocalClient.Create took 7.174948316s
	I0531 17:56:12.200394  242818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:56:12.200481  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:12.233921  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:12.234061  242818 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:12.510494  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:12.545210  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:12.545331  242818 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:10.228074  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:12.727278  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:11.567556  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:14.067647  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:10.414315  243743 cli_runner.go:217] Completed: docker run --rm --name embed-certs-20220531175604-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --entrypoint /usr/bin/test -v embed-certs-20220531175604-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (5.060822572s)
	I0531 17:56:10.414344  243743 oci.go:107] Successfully prepared a docker volume embed-certs-20220531175604-6903
	I0531 17:56:10.414377  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:10.414397  243743 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 17:56:10.414445  243743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220531175604-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 17:56:13.086142  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:13.119793  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:13.119929  242818 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:13.775273  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:13.805802  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	W0531 17:56:13.805936  242818 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 17:56:13.805957  242818 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:13.806040  242818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:56:13.806090  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:13.835894  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:13.836013  242818 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:14.068325  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:14.099888  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:14.100007  242818 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:14.545599  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:14.577377  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:14.577470  242818 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:14.895935  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:14.927184  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:56:14.927294  242818 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:15.482091  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	W0531 17:56:15.513112  242818 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903 returned with exit code 1
	W0531 17:56:15.513206  242818 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 17:56:15.513221  242818 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 17:56:15.513227  242818 start.go:134] duration metric: createHost completed in 12.491379014s
	I0531 17:56:15.513242  242818 start.go:81] releasing machines lock for "newest-cni-20220531175602-6903", held for 12.491483099s
	W0531 17:56:15.513281  242818 start.go:599] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a
	
	stderr:
	docker: Error response from daemon: network newest-cni-20220531175602-6903 not found.
	I0531 17:56:15.513666  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	W0531 17:56:15.544733  242818 start.go:604] delete host: Docker machine "newest-cni-20220531175602-6903" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0531 17:56:15.544907  242818 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a
	
	stderr:
	docker: Error response from daemon: network newest-cni-20220531175602-6903 not found.
	
	I0531 17:56:15.544924  242818 start.go:614] Will try again in 5 seconds ...
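The retry above is minikube reacting to a race between network and container setup: docker run was pointed at --network newest-cni-20220531175602-6903 after that docker network had already been removed, so the daemon prints a container ID on stdout but still fails with exit status 125. A minimal diagnostic sketch for that state, assuming only the profile name taken from this log:

	# Check whether the network the failed docker run referenced still exists.
	NAME=newest-cni-20220531175602-6903
	docker network inspect "$NAME" >/dev/null 2>&1 \
	  || echo "network $NAME is missing (matches the daemon error above)"
	# Remove the half-created container so a retry can recreate it cleanly.
	docker rm -f "$NAME" 2>/dev/null || true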
	I0531 17:56:15.227876  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:17.802254  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:16.149146  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:18.567711  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:17.822157  243743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220531175604-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (7.407658192s)
	I0531 17:56:17.822188  243743 kic.go:188] duration metric: took 7.407788 seconds to extract preloaded images to volume
	W0531 17:56:17.822300  243743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:56:17.822377  243743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:56:17.917371  243743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20220531175604-6903 --name embed-certs-20220531175604-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --network embed-certs-20220531175604-6903 --ip 192.168.49.2 --volume embed-certs-20220531175604-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 17:56:18.309484  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Running}}
	I0531 17:56:18.345151  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.377155  243743 cli_runner.go:164] Run: docker exec embed-certs-20220531175604-6903 stat /var/lib/dpkg/alternatives/iptables
	I0531 17:56:18.433758  243743 oci.go:247] the created container "embed-certs-20220531175604-6903" has a running status.
	I0531 17:56:18.433787  243743 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa...
	I0531 17:56:18.651045  243743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 17:56:18.737122  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.772057  243743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 17:56:18.772085  243743 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220531175604-6903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 17:56:18.848259  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.882465  243743 machine.go:88] provisioning docker machine ...
	I0531 17:56:18.882498  243743 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 17:56:18.882541  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:18.915976  243743 main.go:134] libmachine: Using SSH client type: native
	I0531 17:56:18.916173  243743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0531 17:56:18.916203  243743 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 17:56:19.035081  243743 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 17:56:19.035195  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.067189  243743 main.go:134] libmachine: Using SSH client type: native
	I0531 17:56:19.067362  243743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0531 17:56:19.067394  243743 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:56:19.174458  243743 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 17:56:19.174496  243743 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 17:56:19.174513  243743 ubuntu.go:177] setting up certificates
	I0531 17:56:19.174522  243743 provision.go:83] configureAuth start
	I0531 17:56:19.174563  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.206506  243743 provision.go:138] copyHostCerts
	I0531 17:56:19.206555  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 17:56:19.206563  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 17:56:19.206631  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 17:56:19.206727  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 17:56:19.206748  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 17:56:19.206785  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 17:56:19.206926  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 17:56:19.206955  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 17:56:19.206994  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 17:56:19.207074  243743 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
	I0531 17:56:19.354064  243743 provision.go:172] copyRemoteCerts
	I0531 17:56:19.354118  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:56:19.354167  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.385431  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.465829  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 17:56:19.482372  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 17:56:19.498469  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 17:56:19.514414  243743 provision.go:86] duration metric: configureAuth took 339.882889ms
	I0531 17:56:19.514440  243743 ubuntu.go:193] setting minikube options for container-runtime
	I0531 17:56:19.514580  243743 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:19.514593  243743 machine.go:91] provisioned docker machine in 632.108026ms
	I0531 17:56:19.514598  243743 client.go:171] LocalClient.Create took 14.394605814s
	I0531 17:56:19.514618  243743 start.go:173] duration metric: libmachine.API.Create for "embed-certs-20220531175604-6903" took 14.394651417s
	I0531 17:56:19.514628  243743 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 17:56:19.514633  243743 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:56:19.514668  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:56:19.514710  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.545303  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.630150  243743 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:56:19.632669  243743 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 17:56:19.632694  243743 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 17:56:19.632704  243743 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 17:56:19.632709  243743 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 17:56:19.632717  243743 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 17:56:19.632765  243743 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 17:56:19.632826  243743 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 17:56:19.632902  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 17:56:19.639100  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:56:19.655411  243743 start.go:309] post-start completed in 140.773803ms
	I0531 17:56:19.655734  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.685847  243743 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 17:56:19.686049  243743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:56:19.686083  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.714435  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.795018  243743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:56:19.798696  243743 start.go:134] duration metric: createHost completed in 14.681876121s
	I0531 17:56:19.798719  243743 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 14.682014704s
	I0531 17:56:19.798794  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.828548  243743 ssh_runner.go:195] Run: systemctl --version
	I0531 17:56:19.828596  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.828647  243743 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 17:56:19.828701  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.859164  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.861276  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.956313  243743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 17:56:19.965586  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 17:56:19.973861  243743 docker.go:187] disabling docker service ...
	I0531 17:56:19.973905  243743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 17:56:19.988442  243743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 17:56:19.996566  243743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 17:56:20.078652  243743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 17:56:20.157370  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 17:56:20.165848  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 17:56:20.177829  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
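The long printf argument above is the containerd config.toml that minikube generates, shipped as base64 so it survives shell quoting on its way to /etc/containerd/config.toml. To inspect it, decode the payload; a sketch, with BLOB standing in for the full string from the log line (the decoded file begins with version = 2):

	# Decode the containerd config written by the command above.
	BLOB='dmVyc2lvbiA9IDIK...'   # placeholder: paste the full base64 string from the log
	echo "$BLOB" | base64 -d | head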
	I0531 17:56:20.190831  243743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 17:56:20.196751  243743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 17:56:20.202752  243743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:56:20.278651  243743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 17:56:20.337554  243743 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 17:56:20.337615  243743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 17:56:20.341007  243743 start.go:468] Will wait 60s for crictl version
	I0531 17:56:20.341061  243743 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:56:20.365853  243743 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 17:56:20.365913  243743 ssh_runner.go:195] Run: containerd --version
	I0531 17:56:20.392463  243743 ssh_runner.go:195] Run: containerd --version
	I0531 17:56:20.420160  243743 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 17:56:20.421445  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
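The Go template in that docker network inspect call flattens the network's name, driver, subnet, gateway, MTU and container IPs into one JSON object. A shorter probe for the same basic facts, using the network name from this run:

	# Print the cluster network's name, subnet and gateway directly.
	docker network inspect embed-certs-20220531175604-6903 \
	  --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'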
	I0531 17:56:20.449939  243743 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 17:56:20.452980  243743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:56:20.464025  243743 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 17:56:20.545068  242818 start.go:352] acquiring machines lock for newest-cni-20220531175602-6903: {Name:mk17d90b6d3b0fde22fd963b3786e868dc154060 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:20.545189  242818 start.go:356] acquired machines lock for "newest-cni-20220531175602-6903" in 91.113µs
	I0531 17:56:20.545218  242818 start.go:94] Skipping create...Using existing machine configuration
	I0531 17:56:20.545229  242818 fix.go:55] fixHost starting: 
	I0531 17:56:20.545535  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:20.578422  242818 fix.go:103] recreateIfNeeded on newest-cni-20220531175602-6903: state= err=<nil>
	I0531 17:56:20.578447  242818 fix.go:108] machineExists: false. err=machine does not exist
	I0531 17:56:20.580567  242818 out.go:177] * docker "newest-cni-20220531175602-6903" container is missing, will recreate.
	I0531 17:56:20.465273  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:20.465321  243743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:56:20.488641  243743 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:56:20.488661  243743 containerd.go:521] Images already preloaded, skipping extraction
	I0531 17:56:20.488697  243743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:56:20.509499  243743 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:56:20.509516  243743 cache_images.go:84] Images are preloaded, skipping loading
	I0531 17:56:20.509549  243743 ssh_runner.go:195] Run: sudo crictl info
	I0531 17:56:20.531616  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:20.531635  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:20.531651  243743 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 17:56:20.531662  243743 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 17:56:20.531784  243743 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
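The kubeadm config above is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later promoted to kubeadm.yaml before kubeadm init consumes it. One way to sanity-check such a generated config without touching the node is kubeadm's dry-run mode, sketched here under the paths taken from this log:

	# Validate the generated config end to end without modifying the host.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run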
	
	I0531 17:56:20.531857  243743 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
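The empty ExecStart= in the kubelet drop-in above is the standard systemd idiom: assigning an empty value clears the ExecStart list inherited from the base unit, so the following line replaces the command instead of adding a second one. A minimal drop-in written the same way (trimmed flag set, for illustration only):

	# Install a drop-in that overrides kubelet's ExecStart, then reload systemd.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet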
	I0531 17:56:20.531896  243743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 17:56:20.538217  243743 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 17:56:20.538272  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 17:56:20.544669  243743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 17:56:20.559115  243743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 17:56:20.572041  243743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0531 17:56:20.584477  243743 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 17:56:20.587199  243743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:56:20.595757  243743 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 17:56:20.595849  243743 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 17:56:20.595883  243743 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 17:56:20.595923  243743 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 17:56:20.595935  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt with IP's: []
	I0531 17:56:20.865002  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt ...
	I0531 17:56:20.865034  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt: {Name:mk983f1351054e3a81162f051295cd0c506fcbd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:20.865199  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key ...
	I0531 17:56:20.865212  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key: {Name:mk405f0b57d526c28409acadcba4d956d1f0d13c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:20.865300  243743 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 17:56:20.865315  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 17:56:21.075006  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 ...
	I0531 17:56:21.075031  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2: {Name:mk45ad740db68b95692d916b33d8e02d8dba1ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.075247  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2 ...
	I0531 17:56:21.075264  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2: {Name:mkf42168f7ee31852f4a02d1ef506d7d5a8f7b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.075382  243743 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt
	I0531 17:56:21.075465  243743 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key
	I0531 17:56:21.075522  243743 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 17:56:21.075541  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt with IP's: []
	I0531 17:56:21.134487  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt ...
	I0531 17:56:21.134511  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt: {Name:mk1893a7aa78a1763283fdce57e297466ab59148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.134682  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key ...
	I0531 17:56:21.134698  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key: {Name:mk25a32f5f241a2369c40634bda3a1e4c75a34a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.134919  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 17:56:21.134955  243743 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 17:56:21.134967  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 17:56:21.134989  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 17:56:21.135011  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 17:56:21.135032  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 17:56:21.135067  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:56:21.135606  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 17:56:21.153391  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 17:56:21.169555  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 17:56:21.185845  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 17:56:21.201921  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 17:56:21.217951  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 17:56:21.234159  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 17:56:21.250138  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 17:56:21.266294  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 17:56:21.282188  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 17:56:21.298126  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 17:56:21.313822  243743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 17:56:21.325312  243743 ssh_runner.go:195] Run: openssl version
	I0531 17:56:21.329803  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 17:56:21.336618  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.339408  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.339449  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.343890  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 17:56:21.350793  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 17:56:21.357753  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.360608  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.360649  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.365192  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 17:56:21.372353  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 17:56:21.379136  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.382027  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.382074  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.386461  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
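The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed CA directory layout: each certificate under /etc/ssl/certs must be reachable through a symlink named after its subject hash plus a .0 suffix, which is where names like b5213941.0 and 51391683.0 come from. The generic form, using one of the certs from this run:

	# Compute the subject hash and create the lookup symlink OpenSSL expects.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"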
	I0531 17:56:21.393159  243743 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:56:21.393235  243743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 17:56:21.393282  243743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 17:56:21.417667  243743 cri.go:87] found id: ""
	I0531 17:56:21.417716  243743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 17:56:21.424169  243743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 17:56:21.430469  243743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 17:56:21.430524  243743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 17:56:21.436945  243743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 17:56:21.436981  243743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 17:56:20.581860  242818 delete.go:124] DEMOLISHING newest-cni-20220531175602-6903 ...
	I0531 17:56:20.581935  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:20.611874  242818 stop.go:79] host is in state 
	I0531 17:56:20.611927  242818 main.go:134] libmachine: Stopping "newest-cni-20220531175602-6903"...
	I0531 17:56:20.611984  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:20.641012  242818 kic_runner.go:93] Run: systemctl --version
	I0531 17:56:20.641037  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 systemctl --version]
	I0531 17:56:20.671812  242818 kic_runner.go:93] Run: sudo service kubelet stop
	I0531 17:56:20.671834  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 sudo service kubelet stop]
	I0531 17:56:20.701522  242818 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	
	** /stderr **
	W0531 17:56:20.701538  242818 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	I0531 17:56:20.701588  242818 kic_runner.go:93] Run: sudo service kubelet stop
	I0531 17:56:20.701603  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 sudo service kubelet stop]
	I0531 17:56:20.731170  242818 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	
	** /stderr **
	W0531 17:56:20.731205  242818 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	I0531 17:56:20.731224  242818 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0531 17:56:20.731280  242818 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0531 17:56:20.731291  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0531 17:56:20.760003  242818 kic.go:452] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	I0531 17:56:20.760024  242818 kic.go:462] successfully stopped kubernetes!
	I0531 17:56:20.760059  242818 kic_runner.go:93] Run: pgrep kube-apiserver
	I0531 17:56:20.760069  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 pgrep kube-apiserver]
	I0531 17:56:20.819896  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:20.227821  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:22.727263  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:21.067679  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:23.568416  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:21.673625  243743 out.go:204]   - Generating certificates and keys ...
	I0531 17:56:24.424606  243743 out.go:204]   - Booting up control plane ...
	I0531 17:56:23.853003  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:26.887653  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:24.727660  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:27.227033  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:26.067591  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:28.067620  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:29.919576  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:29.227948  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:31.228307  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:33.727689  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:30.067718  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:32.068461  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:35.960123  243743 out.go:204]   - Configuring RBAC rules ...
	I0531 17:56:36.372484  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:36.372507  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:36.374314  243743 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 17:56:32.961228  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:35.995237  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:35.727884  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:38.227543  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:34.568127  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:37.067720  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:39.067893  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:36.375646  243743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 17:56:36.378972  243743 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 17:56:36.378987  243743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 17:56:36.391818  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 17:56:37.134192  243743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 17:56:37.134265  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.134293  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T17_56_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.216998  243743 ops.go:34] apiserver oom_adj: -16
	I0531 17:56:37.217013  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.772320  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:38.272340  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:38.772405  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:39.271965  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:39.035267  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:42.067914  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:40.727726  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:43.226875  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:41.568146  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:43.568231  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:39.772232  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:40.272312  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:40.772850  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:41.271928  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:41.772325  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:42.272975  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:42.772307  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:43.272387  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:43.772248  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:44.272332  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:45.101303  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:45.227722  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:47.726708  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:46.067788  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:48.068015  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:44.771976  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:45.272400  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:45.772386  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:46.272935  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:46.772442  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:47.272553  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:47.771899  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:48.272016  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:48.772018  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.272518  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.772710  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.826843  243743 kubeadm.go:1045] duration metric: took 12.692623892s to wait for elevateKubeSystemPrivileges.
	I0531 17:56:49.826874  243743 kubeadm.go:397] StartCluster complete in 28.433719659s
	I0531 17:56:49.826894  243743 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:49.826995  243743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:56:49.829203  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:50.344768  243743 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 17:56:50.344838  243743 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:56:50.346548  243743 out.go:177] * Verifying Kubernetes components...
	I0531 17:56:50.344908  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 17:56:50.345125  243743 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:50.345145  243743 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 17:56:50.347978  243743 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 17:56:50.348002  243743 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 17:56:50.348010  243743 addons.go:165] addon storage-provisioner should already be in state true
	I0531 17:56:50.348024  243743 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 17:56:50.348045  243743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 17:56:50.348060  243743 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 17:56:50.347983  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:56:50.348416  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.348631  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.389959  243743 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:56:50.391301  243743 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:56:50.391319  243743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 17:56:50.391356  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:50.404254  243743 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 17:56:50.404284  243743 addons.go:165] addon default-storageclass should already be in state true
	I0531 17:56:50.404312  243743 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 17:56:50.404824  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.426361  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:50.436141  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 17:56:50.437624  243743 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 17:56:50.442307  243743 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 17:56:50.442324  243743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 17:56:50.442358  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:50.480148  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:50.522400  243743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:56:50.716771  243743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 17:56:50.801813  243743 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0531 17:56:50.954578  243743 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 17:56:48.133829  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:51.167150  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:49.727537  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:52.227351  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:50.068503  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:52.568352  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:50.955732  243743 addons.go:417] enableAddons completed in 610.588239ms
	I0531 17:56:52.444126  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:54.200343  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:57.235252  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:54.727443  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:57.227430  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:56:55.068307  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:57.568336  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:56:54.943796  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:57.443053  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:59.443687  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:00.275305  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:56:59.726872  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:01.727580  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:03.727723  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:00.068172  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:02.568206  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:01.943080  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:03.944340  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:03.307575  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:06.343271  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:06.226943  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:08.227336  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:05.067673  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:07.067801  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:06.443747  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:08.942997  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:09.378462  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:12.412507  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:10.227620  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:12.228117  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:09.567953  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:12.067156  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:14.068074  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:10.943575  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:13.443323  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:15.447289  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:14.726998  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:16.727759  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:16.567424  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:18.567581  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:15.443608  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:17.944142  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:18.480515  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:21.512524  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:19.226771  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:21.227348  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:23.227471  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:20.567641  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:22.567732  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:20.443463  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:22.443654  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:24.444191  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:24.547268  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:27.579950  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:25.227788  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:27.726749  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:24.568096  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:26.568336  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:28.568424  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:26.943262  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:28.943501  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:30.614807  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:29.728837  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:32.227880  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:31.068035  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:33.568490  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:30.943907  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:32.944301  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:33.647113  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:36.680389  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:34.727064  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:37.226651  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:36.067248  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:38.068153  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:35.443484  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:37.943999  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:39.713424  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:39.226899  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:41.227775  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:43.227898  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:40.567678  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:42.567872  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:40.443245  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:42.444152  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:42.747612  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:45.781754  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:45.727788  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:48.227436  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:44.568143  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:47.068158  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:44.944382  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:47.443968  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:49.445784  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:48.815262  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:51.847719  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:50.727048  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:52.727666  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:49.568530  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:52.067793  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:54.068016  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:51.944333  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:54.443331  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:54.881323  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:54.727760  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:57.227431  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:57:56.567511  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:58.567932  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:57:56.443483  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:58.942972  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:57.915359  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:00.949899  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:57:59.727047  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:01.727196  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:00.568483  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:03.068327  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:00.944139  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:03.443519  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:03.982733  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:07.016348  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:04.226687  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:06.227052  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:08.227935  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:05.566754  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:07.567450  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:05.943200  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:07.943995  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:10.053975  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:10.726683  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:12.727527  230185 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 17:58:13.729365  230185 node_ready.go:38] duration metric: took 4m0.008516004s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 17:58:13.731613  230185 out.go:177] 
	W0531 17:58:13.733108  230185 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 17:58:13.733126  230185 out.go:239] * 
	W0531 17:58:13.733818  230185 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 17:58:13.735217  230185 out.go:177] 
	I0531 17:58:09.568294  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:12.067392  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:14.068088  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:10.443423  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:12.943328  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:13.086502  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:16.121077  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:16.567914  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:19.067263  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:14.943958  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:17.443050  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:19.443916  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:19.154369  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:22.189477  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:21.067407  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:23.067542  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:21.943559  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:23.944354  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:25.223267  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:25.068210  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:27.568275  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:26.443517  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:28.942984  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:28.255945  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:31.288435  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:30.067941  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:32.068112  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:30.943939  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:32.944212  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:34.323274  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:37.356343  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:34.567004  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:36.567570  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:38.568081  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:35.443909  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:37.944150  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:40.391929  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:40.568431  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:42.569999  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:40.444242  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:42.943610  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:43.423636  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:46.459112  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:45.067990  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:47.568114  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:45.443056  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:47.443392  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:49.492312  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:52.525424  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:49.568309  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:52.067555  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:49.943222  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:51.944006  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:54.443561  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:55.559271  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:58:54.567411  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:57.067999  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:59.068182  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:58:56.443946  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:58.942966  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:58.592350  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:01.624923  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:01.568144  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:04.067808  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:00.943933  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:03.443196  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:04.657327  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:07.691272  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:06.567749  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:08.568349  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:05.443952  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:07.943843  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:10.724305  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:11.067636  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:13.067889  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:09.944329  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:12.444402  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:13.757438  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:16.791990  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:15.566455  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:17.567601  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:14.943225  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:16.944023  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:19.443638  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:19.824931  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:20.067425  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:22.567842  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:21.943242  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:23.944004  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:22.857078  242818 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0531 17:59:22.857126  242818 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0531 17:59:22.857657  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	W0531 17:59:22.891056  242818 delete.go:135] deletehost failed: Docker machine "newest-cni-20220531175602-6903" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 17:59:22.891135  242818 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220531175602-6903
	I0531 17:59:22.921574  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:22.951969  242818 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-20220531175602-6903 /bin/bash -c "sudo init 0"
	W0531 17:59:22.981414  242818 cli_runner.go:211] docker exec --privileged -t newest-cni-20220531175602-6903 /bin/bash -c "sudo init 0" returned with exit code 1
	I0531 17:59:22.981448  242818 oci.go:625] error shutdown newest-cni-20220531175602-6903: docker exec --privileged -t newest-cni-20220531175602-6903 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container d9e57d3709e8d68983434d27bee00fb4ba62e271534f20acfbe1c439e4aa301a is not running
	I0531 17:59:23.981612  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:24.014733  242818 oci.go:639] temporary error: container newest-cni-20220531175602-6903 status is  but expect it to be exited
	I0531 17:59:24.014768  242818 oci.go:645] Successfully shutdown container newest-cni-20220531175602-6903
	I0531 17:59:24.014803  242818 cli_runner.go:164] Run: docker rm -f -v newest-cni-20220531175602-6903
	I0531 17:59:24.051211  242818 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-20220531175602-6903
	W0531 17:59:24.081170  242818 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:59:24.081236  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:59:24.110818  242818 cli_runner.go:211] docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:59:24.110869  242818 network_create.go:272] running [docker network inspect newest-cni-20220531175602-6903] to gather additional debugging logs...
	I0531 17:59:24.110890  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903
	W0531 17:59:24.140519  242818 cli_runner.go:211] docker network inspect newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:59:24.140544  242818 network_create.go:275] error running [docker network inspect newest-cni-20220531175602-6903]: docker network inspect newest-cni-20220531175602-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220531175602-6903
	I0531 17:59:24.140558  242818 network_create.go:277] output of [docker network inspect newest-cni-20220531175602-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220531175602-6903
	
	** /stderr **
	W0531 17:59:24.140718  242818 delete.go:139] delete failed (probably ok) <nil>
	I0531 17:59:24.140732  242818 fix.go:115] Sleeping 1 second for extra luck!
	I0531 17:59:25.141584  242818 start.go:131] createHost starting for "" (driver="docker")
	I0531 17:59:25.143679  242818 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 17:59:25.143804  242818 start.go:165] libmachine.API.Create for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 17:59:25.143845  242818 client.go:168] LocalClient.Create starting
	I0531 17:59:25.143931  242818 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:59:25.143973  242818 main.go:134] libmachine: Decoding PEM data...
	I0531 17:59:25.143997  242818 main.go:134] libmachine: Parsing certificate...
	I0531 17:59:25.144067  242818 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:59:25.144093  242818 main.go:134] libmachine: Decoding PEM data...
	I0531 17:59:25.144113  242818 main.go:134] libmachine: Parsing certificate...
	I0531 17:59:25.144337  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:59:25.176559  242818 cli_runner.go:211] docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:59:25.176621  242818 network_create.go:272] running [docker network inspect newest-cni-20220531175602-6903] to gather additional debugging logs...
	I0531 17:59:25.176645  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903
	W0531 17:59:25.207930  242818 cli_runner.go:211] docker network inspect newest-cni-20220531175602-6903 returned with exit code 1
	I0531 17:59:25.207966  242818 network_create.go:275] error running [docker network inspect newest-cni-20220531175602-6903]: docker network inspect newest-cni-20220531175602-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220531175602-6903
	I0531 17:59:25.207991  242818 network_create.go:277] output of [docker network inspect newest-cni-20220531175602-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220531175602-6903
	
	** /stderr **
	I0531 17:59:25.208075  242818 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:59:25.239738  242818 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-810e286ea246 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:ed:95:7c}}
	I0531 17:59:25.240421  242818 network.go:284] reusing subnet 192.168.58.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.58.0:0xc000010158] amended:false}} dirty:map[] misses:0}
	I0531 17:59:25.240452  242818 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:59:25.240465  242818 network_create.go:115] attempt to create docker network newest-cni-20220531175602-6903 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 17:59:25.240505  242818 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220531175602-6903
	I0531 17:59:25.305356  242818 network_create.go:99] docker network newest-cni-20220531175602-6903 192.168.58.0/24 created
	I0531 17:59:25.305387  242818 kic.go:106] calculated static IP "192.168.58.2" for the "newest-cni-20220531175602-6903" container
	I0531 17:59:25.305436  242818 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:59:25.340923  242818 cli_runner.go:164] Run: docker volume create newest-cni-20220531175602-6903 --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:59:25.371246  242818 oci.go:103] Successfully created a docker volume newest-cni-20220531175602-6903
	I0531 17:59:25.371328  242818 cli_runner.go:164] Run: docker run --rm --name newest-cni-20220531175602-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --entrypoint /usr/bin/test -v newest-cni-20220531175602-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:59:25.843987  242818 oci.go:107] Successfully prepared a docker volume newest-cni-20220531175602-6903
	I0531 17:59:25.844027  242818 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:59:25.844045  242818 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 17:59:25.844093  242818 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220531175602-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 17:59:24.567885  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:26.568273  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:29.067838  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:25.944401  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:28.444502  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:31.190088  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:33.568696  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:30.943797  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:32.944129  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:33.512252  242818 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220531175602-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (7.668105239s)
	I0531 17:59:33.512281  242818 kic.go:188] duration metric: took 7.668233 seconds to extract preloaded images to volume
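
The extraction step above is how minikube warms the node: the lz4 preload tarball is bind-mounted read-only into a throwaway kicbase container and untarred straight into the node's named /var volume, which is why the later `crictl images` calls find everything already present. A minimal sketch of inspecting the result by hand (the volume name is taken from this run; the busybox image is an arbitrary choice, not something the test uses):

	# Mount the same named volume and list the containerd image store the preload populated.
	docker run --rm -v newest-cni-20220531175602-6903:/var busybox ls /var/lib/containerd
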
	W0531 17:59:33.512412  242818 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:59:33.512501  242818 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:59:33.614424  242818 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220531175602-6903 --name newest-cni-20220531175602-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220531175602-6903 --network newest-cni-20220531175602-6903 --ip 192.168.58.2 --volume newest-cni-20220531175602-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 17:59:34.011653  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Running}}
	I0531 17:59:34.048623  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:34.082256  242818 cli_runner.go:164] Run: docker exec newest-cni-20220531175602-6903 stat /var/lib/dpkg/alternatives/iptables
	I0531 17:59:34.143338  242818 oci.go:247] the created container "newest-cni-20220531175602-6903" has a running status.
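
Each `--publish=127.0.0.1::<port>` in the `docker run` above asks Docker for an ephemeral loopback port rather than a fixed one, so the SSH port 49422 seen through the rest of this log was assigned at container-creation time. For illustration, the same inspect template the log uses for 22/tcp resolves any of the published ports:

	# Resolve the host port Docker mapped to the node's API server port (8443/tcp).
	docker container inspect \
		-f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
		newest-cni-20220531175602-6903
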
	I0531 17:59:34.143372  242818 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa...
	I0531 17:59:34.370535  242818 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 17:59:34.459430  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:34.493044  242818 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 17:59:34.493074  242818 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220531175602-6903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 17:59:34.574036  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 17:59:34.606509  242818 machine.go:88] provisioning docker machine ...
	I0531 17:59:34.606554  242818 ubuntu.go:169] provisioning hostname "newest-cni-20220531175602-6903"
	I0531 17:59:34.606610  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:34.642142  242818 main.go:134] libmachine: Using SSH client type: native
	I0531 17:59:34.642323  242818 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0531 17:59:34.642345  242818 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531175602-6903 && echo "newest-cni-20220531175602-6903" | sudo tee /etc/hostname
	I0531 17:59:34.759223  242818 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531175602-6903
	
	I0531 17:59:34.759292  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:34.790712  242818 main.go:134] libmachine: Using SSH client type: native
	I0531 17:59:34.790875  242818 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0531 17:59:34.790910  242818 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531175602-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531175602-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531175602-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:59:34.898387  242818 main.go:134] libmachine: SSH cmd err, output: <nil>: 
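
The SSH snippet above makes the new hostname resolve inside the node: if an /etc/hosts entry ending in the node name already exists it is left alone; otherwise the Debian-style 127.0.1.1 line is rewritten in place (or appended) to point at newest-cni-20220531175602-6903. A quick hand check, assuming the container name from this run:

	# Show the loopback alias the provisioner wrote into the node's /etc/hosts.
	docker exec newest-cni-20220531175602-6903 grep 127.0.1.1 /etc/hosts
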
	I0531 17:59:34.898415  242818 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 17:59:34.898432  242818 ubuntu.go:177] setting up certificates
	I0531 17:59:34.898440  242818 provision.go:83] configureAuth start
	I0531 17:59:34.898478  242818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 17:59:34.930086  242818 provision.go:138] copyHostCerts
	I0531 17:59:34.930141  242818 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 17:59:34.930153  242818 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 17:59:34.930216  242818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 17:59:34.930286  242818 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 17:59:34.930297  242818 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 17:59:34.930324  242818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 17:59:34.930371  242818 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 17:59:34.930382  242818 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 17:59:34.930415  242818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 17:59:34.930490  242818 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531175602-6903 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531175602-6903]
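
The server certificate generated above embeds every name a client might dial (the node IP 192.168.58.2, loopback, localhost, minikube, and the profile name) as Subject Alternative Names. For illustration, the SAN list can be read back with openssl; the path assumes the MINIKUBE_HOME value this run was started with:

	# Print the SANs baked into the freshly generated server certificate.
	openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
		| grep -A1 'Subject Alternative Name'
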
	I0531 17:59:35.104614  242818 provision.go:172] copyRemoteCerts
	I0531 17:59:35.104663  242818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:59:35.104698  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:35.136707  242818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 17:59:35.221906  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 17:59:35.238792  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0531 17:59:35.255481  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 17:59:35.271545  242818 provision.go:86] duration metric: configureAuth took 373.098159ms
	I0531 17:59:35.271564  242818 ubuntu.go:193] setting minikube options for container-runtime
	I0531 17:59:35.271739  242818 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:59:35.271754  242818 machine.go:91] provisioned docker machine in 665.221459ms
	I0531 17:59:35.271759  242818 client.go:171] LocalClient.Create took 10.127905349s
	I0531 17:59:35.271774  242818 start.go:173] duration metric: libmachine.API.Create for "newest-cni-20220531175602-6903" took 10.127972134s
	I0531 17:59:35.271783  242818 start.go:306] post-start starting for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 17:59:35.271788  242818 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:59:35.271828  242818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:59:35.271874  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:35.302917  242818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 17:59:35.386187  242818 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:59:35.388672  242818 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 17:59:35.388694  242818 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 17:59:35.388703  242818 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 17:59:35.388708  242818 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 17:59:35.388720  242818 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 17:59:35.388762  242818 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 17:59:35.388826  242818 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 17:59:35.388911  242818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 17:59:35.395113  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
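
The filesync scan above mirrors anything under the profile's .minikube/files tree into the node at the same relative path, which is how the host's 69032.pem ends up in /etc/ssl/certs inside the container. A sketch of listing what would be synced, again assuming this run's MINIKUBE_HOME:

	# Every file below files/ is copied into the node at the same relative path.
	find "$MINIKUBE_HOME/files" -type f
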
	I0531 17:59:35.411604  242818 start.go:309] post-start completed in 139.811523ms
	I0531 17:59:35.411890  242818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 17:59:35.442989  242818 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 17:59:35.443280  242818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:59:35.443333  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:35.473685  242818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 17:59:35.555841  242818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:59:35.559683  242818 start.go:134] duration metric: createHost completed in 10.418071687s
	I0531 17:59:35.559743  242818 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	W0531 17:59:35.591125  242818 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 17:59:35.591176  242818 machine.go:88] provisioning docker machine ...
	I0531 17:59:35.591197  242818 ubuntu.go:169] provisioning hostname "newest-cni-20220531175602-6903"
	I0531 17:59:35.591238  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:35.621175  242818 main.go:134] libmachine: Using SSH client type: native
	I0531 17:59:35.621365  242818 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0531 17:59:35.621383  242818 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531175602-6903 && echo "newest-cni-20220531175602-6903" | sudo tee /etc/hostname
	I0531 17:59:35.742596  242818 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531175602-6903
	
	I0531 17:59:35.742674  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:35.774170  242818 main.go:134] libmachine: Using SSH client type: native
	I0531 17:59:35.774328  242818 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0531 17:59:35.774349  242818 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531175602-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531175602-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531175602-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:59:35.886587  242818 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 17:59:35.886616  242818 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 17:59:35.886635  242818 ubuntu.go:177] setting up certificates
	I0531 17:59:35.886647  242818 provision.go:83] configureAuth start
	I0531 17:59:35.886691  242818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 17:59:35.917473  242818 provision.go:138] copyHostCerts
	I0531 17:59:35.917531  242818 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 17:59:35.917545  242818 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 17:59:35.917598  242818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 17:59:35.917689  242818 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 17:59:35.917706  242818 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 17:59:35.917735  242818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 17:59:35.917797  242818 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 17:59:35.917810  242818 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 17:59:35.917835  242818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 17:59:35.917892  242818 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531175602-6903 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531175602-6903]
	I0531 17:59:36.209733  242818 provision.go:172] copyRemoteCerts
	I0531 17:59:36.209788  242818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:59:36.209828  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:36.241545  242818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 17:59:36.322116  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 17:59:36.338499  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 17:59:36.354885  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 17:59:36.371643  242818 provision.go:86] duration metric: configureAuth took 484.984825ms
	I0531 17:59:36.371663  242818 ubuntu.go:193] setting minikube options for container-runtime
	I0531 17:59:36.371835  242818 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:59:36.371847  242818 machine.go:91] provisioned docker machine in 780.665824ms
	I0531 17:59:36.371853  242818 start.go:306] post-start starting for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 17:59:36.371862  242818 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:59:36.371907  242818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:59:36.371941  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:36.403796  242818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 17:59:36.490238  242818 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:59:36.492885  242818 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 17:59:36.492907  242818 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 17:59:36.492924  242818 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 17:59:36.492932  242818 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 17:59:36.492948  242818 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 17:59:36.492998  242818 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 17:59:36.493079  242818 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 17:59:36.493173  242818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 17:59:36.499509  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:59:36.515893  242818 start.go:309] post-start completed in 144.026672ms
	I0531 17:59:36.515956  242818 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:59:36.516003  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:36.547568  242818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 17:59:36.627418  242818 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:59:36.631071  242818 fix.go:57] fixHost completed within 3m16.085837315s
	I0531 17:59:36.631098  242818 start.go:81] releasing machines lock for "newest-cni-20220531175602-6903", held for 3m16.085891506s
	I0531 17:59:36.631202  242818 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 17:59:36.661648  242818 ssh_runner.go:195] Run: sudo service crio stop
	I0531 17:59:36.661693  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:36.661706  242818 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 17:59:36.661756  242818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 17:59:36.693497  242818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 17:59:36.694578  242818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 17:59:37.177231  242818 openrc.go:165] stop output: 
	I0531 17:59:37.177294  242818 ssh_runner.go:195] Run: sudo service crio status
	I0531 17:59:37.194474  242818 docker.go:187] disabling docker service ...
	I0531 17:59:37.194522  242818 ssh_runner.go:195] Run: sudo service docker.socket stop
	I0531 17:59:37.544629  242818 openrc.go:165] stop output: 
	** stderr ** 
	Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	
	** /stderr **
	E0531 17:59:37.544659  242818 docker.go:190] "Failed to stop" err=<
		sudo service docker.socket stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	 > service="docker.socket"
	I0531 17:59:37.544703  242818 ssh_runner.go:195] Run: sudo service docker.service stop
	I0531 17:59:36.068281  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:38.567893  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:35.443699  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:37.943489  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:37.891591  242818 openrc.go:165] stop output: 
	** stderr ** 
	Failed to stop docker.service.service: Unit docker.service.service not loaded.
	
	** /stderr **
	E0531 17:59:37.891617  242818 docker.go:193] "Failed to stop" err=<
		sudo service docker.service stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.service.service: Unit docker.service.service not loaded.
	 > service="docker.service"
	W0531 17:59:37.891624  242818 cruntime.go:284] disable failed: sudo service docker.service stop: Process exited with status 5
	stdout:
	
	stderr:
	Failed to stop docker.service.service: Unit docker.service.service not loaded.
	I0531 17:59:37.891664  242818 ssh_runner.go:195] Run: sudo service docker status
	W0531 17:59:37.906790  242818 containerd.go:245] disableOthers: Docker is still active
	I0531 17:59:37.906935  242818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 17:59:37.919096  242818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
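
The containerd configuration is shipped base64-encoded so it survives the layers of shell quoting in ssh_runner; decoded, the payload above is an ordinary config.toml (version 2, overlayfs snapshotter, CNI conf_dir /etc/cni/net.mk, SystemdCgroup = false, sandbox image k8s.gcr.io/pause:3.6). To read the rendered file back on the node, for illustration:

	# The decoded file is exactly what containerd is restarted with below.
	sudo head -n 20 /etc/containerd/config.toml
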
	I0531 17:59:37.931539  242818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 17:59:37.937431  242818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 17:59:37.943645  242818 ssh_runner.go:195] Run: sudo service containerd restart
	I0531 17:59:38.016466  242818 openrc.go:152] restart output: 
	I0531 17:59:38.016494  242818 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 17:59:38.016541  242818 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 17:59:38.020067  242818 start.go:468] Will wait 60s for crictl version
	I0531 17:59:38.020129  242818 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:59:38.049272  242818 retry.go:31] will retry after 8.009118606s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T17:59:38Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 17:59:41.067896  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:43.567459  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:39.943619  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:41.943950  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:44.443800  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:46.059254  242818 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:59:46.082489  242818 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 17:59:46.082538  242818 ssh_runner.go:195] Run: containerd --version
	I0531 17:59:46.109100  242818 ssh_runner.go:195] Run: containerd --version
	I0531 17:59:46.136641  242818 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 17:59:46.138396  242818 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:59:46.169675  242818 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 17:59:46.172804  242818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:59:46.183390  242818 out.go:177]   - kubelet.network-plugin=cni
	I0531 17:59:46.184936  242818 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 17:59:46.186314  242818 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 17:59:46.187548  242818 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:59:46.187598  242818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:59:46.210423  242818 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:59:46.210442  242818 containerd.go:521] Images already preloaded, skipping extraction
	I0531 17:59:46.210483  242818 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:59:46.233732  242818 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:59:46.233754  242818 cache_images.go:84] Images are preloaded, skipping loading
	I0531 17:59:46.233796  242818 ssh_runner.go:195] Run: sudo crictl info
	I0531 17:59:46.255562  242818 cni.go:95] Creating CNI manager for ""
	I0531 17:59:46.255586  242818 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:59:46.255604  242818 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0531 17:59:46.255622  242818 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220531175602-6903 NodeName:newest-cni-20220531175602-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 17:59:46.255787  242818 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220531175602-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
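
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered into a single file, the 2195-byte payload scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below; the unusual pod CIDR 192.168.111.111/16 comes straight from the test's kubeadm.pod-network-cidr extra option. A sketch of pulling the networking values back out on the node:

	# Confirm the CIDRs kubeadm, kubelet and kube-proxy were rendered with.
	sudo grep -E 'podSubnet|serviceSubnet|clusterCIDR' /var/tmp/minikube/kubeadm.yaml.new
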
	I0531 17:59:46.255890  242818 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220531175602-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
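
The unit text above is a systemd drop-in: the empty `ExecStart=` first clears the stock kubelet command line, then the full flag set is declared; the scp below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes). On a systemd host the merged result could be checked as follows, though this kicbase image also carries the openrc wrapper seen elsewhere in the log, so this is illustrative only:

	# Show the base unit plus the 10-kubeadm.conf drop-in as systemd merges them.
	systemctl cat kubelet
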
	I0531 17:59:46.255951  242818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 17:59:46.262710  242818 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 17:59:46.263262  242818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0531 17:59:46.270525  242818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes)
	I0531 17:59:46.282220  242818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 17:59:46.294003  242818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2195 bytes)
	I0531 17:59:46.305593  242818 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0531 17:59:46.317164  242818 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0531 17:59:46.328791  242818 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 17:59:46.331365  242818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:59:46.339756  242818 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903 for IP: 192.168.58.2
	I0531 17:59:46.339836  242818 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 17:59:46.339873  242818 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 17:59:46.339916  242818 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.key
	I0531 17:59:46.339929  242818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.crt with IP's: []
	I0531 17:59:46.878263  242818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.crt ...
	I0531 17:59:46.878294  242818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.crt: {Name:mk5b4327f53a508fbfec97e04847b681fa1dfca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:59:46.878475  242818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.key ...
	I0531 17:59:46.878491  242818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.key: {Name:mk5926e587d5fa34977e918e4b3054e501eed0c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:59:46.878577  242818 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key.cee25041
	I0531 17:59:46.878593  242818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 17:59:47.000397  242818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt.cee25041 ...
	I0531 17:59:47.000421  242818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt.cee25041: {Name:mkcb31410435348b414386661c66e041e23705dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:59:47.000586  242818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key.cee25041 ...
	I0531 17:59:47.000599  242818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key.cee25041: {Name:mk1b036224ad60a27f6ef4264d6c66ed4c34ac07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:59:47.000680  242818 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt
	I0531 17:59:47.000741  242818 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key
	I0531 17:59:47.000785  242818 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key
	I0531 17:59:47.000800  242818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.crt with IP's: []
	I0531 17:59:47.106797  242818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.crt ...
	I0531 17:59:47.106827  242818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.crt: {Name:mk79c4d5b07797858f94757e7ef1d9346d5a240c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:59:47.107034  242818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key ...
	I0531 17:59:47.107052  242818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key: {Name:mk039f4956a74a46b1dfa1940a0c9ce145393fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:59:47.107301  242818 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 17:59:47.107363  242818 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 17:59:47.107385  242818 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 17:59:47.107440  242818 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 17:59:47.107482  242818 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 17:59:47.107519  242818 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 17:59:47.107587  242818 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:59:47.108127  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 17:59:47.125915  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 17:59:47.142329  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 17:59:47.158484  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 17:59:47.174629  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 17:59:47.190773  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 17:59:47.206977  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 17:59:47.222866  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 17:59:47.238834  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 17:59:47.254752  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 17:59:47.270751  242818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 17:59:47.286782  242818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 17:59:47.298447  242818 ssh_runner.go:195] Run: openssl version
	I0531 17:59:47.302719  242818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 17:59:47.309328  242818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 17:59:47.312116  242818 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 17:59:47.312157  242818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 17:59:47.316479  242818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 17:59:47.323036  242818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 17:59:47.329896  242818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 17:59:47.332627  242818 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 17:59:47.332666  242818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 17:59:47.337010  242818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 17:59:47.343593  242818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 17:59:47.350007  242818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:59:47.352752  242818 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:59:47.352794  242818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:59:47.357058  242818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
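For reference, the ls/openssl/ln sequence above installs each CA certificate under its OpenSSL subject hash so that TLS clients scanning /etc/ssl/certs can resolve it. A minimal Go sketch of the same pattern (not minikube's actual code; the helper name and paths are illustrative, and the openssl binary is assumed to be on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under its OpenSSL subject hash,
// mirroring the `openssl x509 -hash -noout` + `ln -fs` steps in the log above.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; the run above uses /usr/share/ca-certificates.
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}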
	I0531 17:59:47.363870  242818 kubeadm.go:395] StartCluster: {Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:59:47.363942  242818 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 17:59:47.363971  242818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 17:59:47.387634  242818 cri.go:87] found id: ""
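The empty `found id: ""` result is the outcome of the crictl query just issued: on a freshly created node there are no kube-system containers yet. A small Go sketch reproducing that query (assumes crictl and sudo are available on the node; the output handling is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out)) // one container ID per line; empty on a fresh node
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}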
	I0531 17:59:47.387685  242818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 17:59:47.394054  242818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 17:59:47.400487  242818 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 17:59:47.400526  242818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 17:59:47.406596  242818 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 17:59:47.406642  242818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 17:59:47.648324  242818 out.go:204]   - Generating certificates and keys ...
	I0531 17:59:46.067986  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:48.568083  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:46.944150  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:49.443556  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:50.568175  237733 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 17:59:51.570285  237733 node_ready.go:38] duration metric: took 4m0.009297795s waiting for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 17:59:51.572685  237733 out.go:177] 
	W0531 17:59:51.574080  237733 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 17:59:51.574098  237733 out.go:239] * 
	W0531 17:59:51.574853  237733 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 17:59:51.576647  237733 out.go:177] 
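The GUEST_START failure above is a readiness wait giving up: the node_ready.go lines earlier show the harness polling the node's Ready condition until a deadline expires. A rough client-go sketch of such a poll (a hypothetical helper, not minikube's implementation; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node reports Ready=True or the timeout expires,
// roughly what the "waiting for node ... to be Ready" log lines correspond to.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for node %q to be Ready", name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "default-k8s-different-port-20220531175509-6903", 6*time.Minute))
}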
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	5463b77511769       6de166512aa22       About a minute ago   Exited              kindnet-cni               4                   512d6145343b2
	cb3e6f9b5d67c       4c03754524064       4 minutes ago        Running             kube-proxy                0                   95cdf505c32bc
	a2c6538b95f74       595f327f224a4       4 minutes ago        Running             kube-scheduler            0                   1f2c20e63b683
	1b1996168f6e9       8fa62c12256df       4 minutes ago        Running             kube-apiserver            0                   6051433bcfd54
	509e04aaab068       df7b72818ad2e       4 minutes ago        Running             kube-controller-manager   0                   de31468fb264b
	ea294bc0a9be2       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   3eec3f7ca8031
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:55:18 UTC, end at Tue 2022-05-31 17:59:52 UTC. --
	May 31 17:56:41 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:56:41.436170608Z" level=warning msg="cleaning up after shim disconnected" id=4936679167f798401a094836614927d4ed026124d96dc8d239ddf4ddea62fdf1 namespace=k8s.io
	May 31 17:56:41 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:56:41.436188481Z" level=info msg="cleaning up dead shim"
	May 31 17:56:41 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:56:41.445366924Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:56:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2203 runtime=io.containerd.runc.v2\n"
	May 31 17:56:41 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:56:41.982581257Z" level=info msg="RemoveContainer for \"aad965451286f51e6aa58fc1dfbda36374f36e50de8a60f0efbd66d590fbc776\""
	May 31 17:56:41 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:56:41.986270575Z" level=info msg="RemoveContainer for \"aad965451286f51e6aa58fc1dfbda36374f36e50de8a60f0efbd66d590fbc776\" returns successfully"
	May 31 17:57:12 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:12.031130356Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	May 31 17:57:12 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:12.042330884Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\""
	May 31 17:57:12 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:12.042742465Z" level=info msg="StartContainer for \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\""
	May 31 17:57:12 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:12.115410723Z" level=info msg="StartContainer for \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\" returns successfully"
	May 31 17:57:22 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:22.339237574Z" level=info msg="shim disconnected" id=55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725
	May 31 17:57:22 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:22.339299951Z" level=warning msg="cleaning up after shim disconnected" id=55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725 namespace=k8s.io
	May 31 17:57:22 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:22.339312424Z" level=info msg="cleaning up dead shim"
	May 31 17:57:22 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:22.348823412Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:57:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2280 runtime=io.containerd.runc.v2\n"
	May 31 17:57:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:23.307518015Z" level=info msg="RemoveContainer for \"4936679167f798401a094836614927d4ed026124d96dc8d239ddf4ddea62fdf1\""
	May 31 17:57:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:57:23.311945168Z" level=info msg="RemoveContainer for \"4936679167f798401a094836614927d4ed026124d96dc8d239ddf4ddea62fdf1\" returns successfully"
	May 31 17:58:13 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:13.031356985Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	May 31 17:58:13 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:13.043548716Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34\""
	May 31 17:58:13 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:13.043950789Z" level=info msg="StartContainer for \"5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34\""
	May 31 17:58:13 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:13.205602375Z" level=info msg="StartContainer for \"5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34\" returns successfully"
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.436332222Z" level=info msg="shim disconnected" id=5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.436395795Z" level=warning msg="cleaning up after shim disconnected" id=5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34 namespace=k8s.io
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.436406915Z" level=info msg="cleaning up dead shim"
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.445684136Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2358 runtime=io.containerd.runc.v2\n"
	May 31 17:58:24 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:24.415891765Z" level=info msg="RemoveContainer for \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\""
	May 31 17:58:24 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:24.419889760Z" level=info msg="RemoveContainer for \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220531175509-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220531175509-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_55_37_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:55:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220531175509-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 17:59:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 17:55:50 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 17:55:50 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 17:55:50 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 17:55:50 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220531175509-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                6be22935-bf30-494f-8e0a-066b777ef988
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220531175509-6903                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m17s
	  kube-system                 kindnet-vdbp9                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m1s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220531175509-6903              250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220531175509-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-ff6gx                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220531175509-6903              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m     kube-proxy  
	  Normal  Starting                 4m11s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s  kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f] <==
	* {"level":"info","ts":"2022-05-31T17:55:31.829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:31.829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:31.829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:31.829Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220531175509-6903 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.831Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:55:31.831Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2022-05-31T17:56:05.802Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"200.644923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-05-31T17:56:05.802Z","caller":"traceutil/trace.go:171","msg":"trace[1170885200] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:476; }","duration":"200.814942ms","start":"2022-05-31T17:56:05.602Z","end":"2022-05-31T17:56:05.802Z","steps":["trace[1170885200] 'agreement among raft nodes before linearized reading'  (duration: 97.859628ms)","trace[1170885200] 'range keys from in-memory index tree'  (duration: 102.728736ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:15.762Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"169.329455ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638328710165085387 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:476 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:67 lease:6414956673310309577 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-05-31T17:56:15.762Z","caller":"traceutil/trace.go:171","msg":"trace[343527482] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"197.187499ms","start":"2022-05-31T17:56:15.565Z","end":"2022-05-31T17:56:15.762Z","steps":["trace[343527482] 'read index received'  (duration: 26.832028ms)","trace[343527482] 'applied index is now lower than readState.Index'  (duration: 170.353994ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:15.762Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"197.426091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-different-port-20220531175509-6903\" ","response":"range_response_count:1 size:4921"}
	{"level":"info","ts":"2022-05-31T17:56:15.762Z","caller":"traceutil/trace.go:171","msg":"trace[1430337056] range","detail":"{range_begin:/registry/minions/default-k8s-different-port-20220531175509-6903; range_end:; response_count:1; response_revision:478; }","duration":"197.45664ms","start":"2022-05-31T17:56:15.565Z","end":"2022-05-31T17:56:15.762Z","steps":["trace[1430337056] 'agreement among raft nodes before linearized reading'  (duration: 197.296156ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T17:56:15.763Z","caller":"traceutil/trace.go:171","msg":"trace[1158323802] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"230.143408ms","start":"2022-05-31T17:56:15.532Z","end":"2022-05-31T17:56:15.763Z","steps":["trace[1158323802] 'process raft request'  (duration: 59.357333ms)","trace[1158323802] 'compare'  (duration: 168.812361ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:16.147Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.435587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:56:16.147Z","caller":"traceutil/trace.go:171","msg":"trace[234350805] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:478; }","duration":"275.522421ms","start":"2022-05-31T17:56:15.872Z","end":"2022-05-31T17:56:16.147Z","steps":["trace[234350805] 'range keys from in-memory index tree'  (duration: 275.375333ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:59:31.188Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.089567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-different-port-20220531175509-6903\" ","response":"range_response_count:1 size:4921"}
	{"level":"info","ts":"2022-05-31T17:59:31.188Z","caller":"traceutil/trace.go:171","msg":"trace[1870884025] range","detail":"{range_begin:/registry/minions/default-k8s-different-port-20220531175509-6903; range_end:; response_count:1; response_revision:559; }","duration":"122.184032ms","start":"2022-05-31T17:59:31.066Z","end":"2022-05-31T17:59:31.188Z","steps":["trace[1870884025] 'range keys from in-memory index tree'  (duration: 121.959844ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  17:59:52 up  1:42,  0 users,  load average: 0.82, 1.00, 1.56
	Linux default-k8s-different-port-20220531175509-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11] <==
	* I0531 17:55:34.101758       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:55:34.111206       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:55:34.111380       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:55:34.111710       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:55:34.111829       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:55:34.119947       1 controller.go:611] quota admission added evaluator for: namespaces
	I0531 17:55:34.997992       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:55:34.998017       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:55:35.015412       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:55:35.019403       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:55:35.019422       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:55:35.375475       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:55:35.417331       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:55:35.533778       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:55:35.540935       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0531 17:55:35.541792       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:55:35.545091       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:55:36.131709       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:55:36.909454       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:55:36.916783       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:55:36.925822       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:55:42.014482       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:55:51.091344       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:55:51.190456       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:55:52.128829       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999] <==
	* I0531 17:55:50.394877       1 shared_informer.go:247] Caches are synced for HPA 
	I0531 17:55:50.406053       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 17:55:50.437491       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:55:50.438548       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:55:50.438580       1 shared_informer.go:247] Caches are synced for GC 
	I0531 17:55:50.438605       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 17:55:50.438586       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0531 17:55:50.438629       1 shared_informer.go:247] Caches are synced for endpoint 
	I0531 17:55:50.438646       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0531 17:55:50.537706       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 17:55:50.537739       1 disruption.go:371] Sending events to api server.
	I0531 17:55:50.542100       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:55:50.546629       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:55:50.574949       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0531 17:55:50.588247       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 17:55:50.965267       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:55:51.037122       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:55:51.037154       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:55:51.095058       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:55:51.107553       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:55:51.196401       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vdbp9"
	I0531 17:55:51.200003       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ff6gx"
	I0531 17:55:51.342466       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-z47gr"
	I0531 17:55:51.346589       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-92zgx"
	I0531 17:55:51.362421       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-z47gr"
	
	* 
	* ==> kube-proxy [cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783] <==
	* I0531 17:55:52.033542       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0531 17:55:52.033619       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0531 17:55:52.033664       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:55:52.125079       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:55:52.125116       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:55:52.125125       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:55:52.125149       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:55:52.125539       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:55:52.126126       1 config.go:317] "Starting service config controller"
	I0531 17:55:52.126162       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:55:52.126352       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:55:52.126370       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:55:52.227300       1 shared_informer.go:247] Caches are synced for service config 
	I0531 17:55:52.227972       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb] <==
	* W0531 17:55:34.201559       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:55:34.201633       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:55:34.201879       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:55:34.202010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:55:34.202066       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:34.202150       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:34.202177       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:34.202150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:34.202470       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 17:55:34.202627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 17:55:34.202947       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:55:34.203128       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:55:34.204109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:55:34.204191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:55:34.204440       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:55:34.204494       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:55:35.025337       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:55:35.025375       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:55:35.045433       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:55:35.045468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 17:55:35.202591       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:55:35.202639       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:35.202763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:55:35.202795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0531 17:55:37.118161       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:55:18 UTC, end at Tue 2022-05-31 17:59:52 UTC. --
	May 31 17:58:42 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:58:42.250925    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:58:47 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:58:47.252247    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:58:52 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 17:58:52.028486    1298 scope.go:110] "RemoveContainer" containerID="5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34"
	May 31 17:58:52 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:58:52.028767    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 17:58:52 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:58:52.253982    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:58:57 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:58:57.255668    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:02 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:02.256756    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:03 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 17:59:03.028539    1298 scope.go:110] "RemoveContainer" containerID="5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34"
	May 31 17:59:03 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:03.028829    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 17:59:07 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:07.257954    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:12 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:12.259092    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:14 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 17:59:14.028836    1298 scope.go:110] "RemoveContainer" containerID="5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34"
	May 31 17:59:14 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:14.029105    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 17:59:17 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:17.259707    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:22 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:22.260569    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:27 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:27.261561    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:29 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 17:59:29.028591    1298 scope.go:110] "RemoveContainer" containerID="5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34"
	May 31 17:59:29 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:29.028968    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 17:59:32 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:32.262258    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:37 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:37.262878    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:40 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 17:59:40.029234    1298 scope.go:110] "RemoveContainer" containerID="5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34"
	May 31 17:59:40 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:40.029509    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 17:59:42 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:42.263864    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:47 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:47.265357    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:52 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 17:59:52.266889    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	
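The kubelet entries above show why this node never became Ready: kindnet-cni is stuck in CrashLoopBackOff, so the CNI plugin never initializes and the runtime keeps reporting NetworkReady=false. A hedged client-go sketch for spotting containers in that state (a hypothetical helper, not part of the test suite; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// listCrashLooping prints namespace/pod/container for every container
// currently waiting in CrashLoopBackOff, like kindnet-cni in the log above.
func listCrashLooping(cs kubernetes.Interface, ns string) error {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil && st.State.Waiting.Reason == "CrashLoopBackOff" {
				fmt.Printf("%s/%s/%s\n", p.Namespace, p.Name, st.Name)
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	if err := listCrashLooping(kubernetes.NewForConfigOrDie(cfg), "kube-system"); err != nil {
		panic(err)
	}
}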

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-92zgx storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-92zgx storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-92zgx storage-provisioner: exit status 1 (51.794796ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-92zgx" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-92zgx storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (284.62s)

TestStartStop/group/embed-certs/serial/FirstStart (287.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220531175604-6903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0531 17:56:05.001801    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:57:00.656937    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:57:15.749828    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:15.755079    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:15.765337    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:15.785542    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:15.825844    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:15.906159    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:16.066528    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:16.386763    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:17.027237    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:18.307658    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:20.868184    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:25.988916    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:36.229630    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:57:56.710077    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory

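The burst of cert_rotation "key failed" lines above is background noise from the Kubernetes client's certificate-reload watcher (cert_rotation.go), which is still polling client certificates of profiles that earlier tests in this run already tore down (old-k8s-version, functional, ingress-addon-legacy); it has no bearing on this test's result. A quick check that the watched file is simply gone (path copied verbatim from the log):

    # The watcher keeps retrying a cert that was removed with its profile.
    stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt
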
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20220531175604-6903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (4m45.76157906s)

-- stdout --
	* [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

-- /stdout --
** stderr ** 
	I0531 17:56:04.761070  243743 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:56:04.761198  243743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:04.761209  243743 out.go:309] Setting ErrFile to fd 2...
	I0531 17:56:04.761213  243743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:04.761320  243743 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:56:04.761604  243743 out.go:303] Setting JSON to false
	I0531 17:56:04.763369  243743 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5916,"bootTime":1654013849,"procs":806,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:56:04.763439  243743 start.go:125] virtualization: kvm guest
	I0531 17:56:04.765860  243743 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:56:04.767852  243743 notify.go:193] Checking for updates...
	I0531 17:56:04.767855  243743 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:56:04.769545  243743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:56:04.771229  243743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:56:04.772729  243743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:56:04.774183  243743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:56:04.776078  243743 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776263  243743 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776404  243743 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:04.776470  243743 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:56:04.818427  243743 docker.go:137] docker version: linux-20.10.16
	I0531 17:56:04.818525  243743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:56:04.933426  243743 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:65 SystemTime:2022-05-31 17:56:04.851840173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:56:04.933610  243743 docker.go:254] overlay module found
	I0531 17:56:04.936012  243743 out.go:177] * Using the docker driver based on user configuration
	I0531 17:56:04.937461  243743 start.go:284] selected driver: docker
	I0531 17:56:04.937479  243743 start.go:806] validating driver "docker" against <nil>
	I0531 17:56:04.937498  243743 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:56:04.938476  243743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:56:05.050928  243743 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:65 SystemTime:2022-05-31 17:56:04.970943421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:56:05.051044  243743 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 17:56:05.051282  243743 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 17:56:05.053473  243743 out.go:177] * Using Docker driver with the root privilege
	I0531 17:56:05.054914  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:05.054932  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:05.054948  243743 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:56:05.054953  243743 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:56:05.054960  243743 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 17:56:05.054974  243743 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:56:05.056598  243743 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 17:56:05.058015  243743 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 17:56:05.059392  243743 out.go:177] * Pulling base image ...
	I0531 17:56:05.060693  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:05.060727  243743 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:56:05.060733  243743 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 17:56:05.060745  243743 cache.go:57] Caching tarball of preloaded images
	I0531 17:56:05.060946  243743 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 17:56:05.060966  243743 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 17:56:05.061099  243743 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 17:56:05.061132  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json: {Name:mk012ae752926ff69a2c9dc59c259dc1c0bd12d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:05.116443  243743 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 17:56:05.116476  243743 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 17:56:05.116493  243743 cache.go:206] Successfully downloaded all kic artifacts
	I0531 17:56:05.116542  243743 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:05.116688  243743 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 125.335µs
	I0531 17:56:05.116714  243743 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:56:05.116812  243743 start.go:131] createHost starting for "" (driver="docker")
	I0531 17:56:05.119757  243743 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 17:56:05.119962  243743 start.go:165] libmachine.API.Create for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 17:56:05.119989  243743 client.go:168] LocalClient.Create starting
	I0531 17:56:05.120055  243743 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
	I0531 17:56:05.120089  243743 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:05.120110  243743 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:05.120167  243743 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
	I0531 17:56:05.120184  243743 main.go:134] libmachine: Decoding PEM data...
	I0531 17:56:05.120193  243743 main.go:134] libmachine: Parsing certificate...
	I0531 17:56:05.120480  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 17:56:05.151452  243743 cli_runner.go:211] docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 17:56:05.151507  243743 network_create.go:272] running [docker network inspect embed-certs-20220531175604-6903] to gather additional debugging logs...
	I0531 17:56:05.151528  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903
	W0531 17:56:05.183019  243743 cli_runner.go:211] docker network inspect embed-certs-20220531175604-6903 returned with exit code 1
	I0531 17:56:05.183050  243743 network_create.go:275] error running [docker network inspect embed-certs-20220531175604-6903]: docker network inspect embed-certs-20220531175604-6903: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220531175604-6903
	I0531 17:56:05.183073  243743 network_create.go:277] output of [docker network inspect embed-certs-20220531175604-6903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220531175604-6903
	
	** /stderr **
	I0531 17:56:05.183114  243743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:56:05.218722  243743 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060e2d8] misses:0}
	I0531 17:56:05.218771  243743 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 17:56:05.218788  243743 network_create.go:115] attempt to create docker network embed-certs-20220531175604-6903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 17:56:05.218827  243743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220531175604-6903
	I0531 17:56:05.286551  243743 network_create.go:99] docker network embed-certs-20220531175604-6903 192.168.49.0/24 created
	I0531 17:56:05.286588  243743 kic.go:106] calculated static IP "192.168.49.2" for the "embed-certs-20220531175604-6903" container
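	# Sketch: minikube reserved 192.168.49.0/24 above and derived the node IP as
	# the first client address (gateway .1, ClientMin .2 in the subnet record).
	# What the created network actually got can be double-checked with a
	# standard docker-inspect Go template (network name from the log):
	docker network inspect embed-certs-20220531175604-6903 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gw={{.Gateway}}{{end}}'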
	I0531 17:56:05.286654  243743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 17:56:05.321293  243743 cli_runner.go:164] Run: docker volume create embed-certs-20220531175604-6903 --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --label created_by.minikube.sigs.k8s.io=true
	I0531 17:56:05.353388  243743 oci.go:103] Successfully created a docker volume embed-certs-20220531175604-6903
	I0531 17:56:05.353454  243743 cli_runner.go:164] Run: docker run --rm --name embed-certs-20220531175604-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --entrypoint /usr/bin/test -v embed-certs-20220531175604-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 17:56:10.414315  243743 cli_runner.go:217] Completed: docker run --rm --name embed-certs-20220531175604-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --entrypoint /usr/bin/test -v embed-certs-20220531175604-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (5.060822572s)
	I0531 17:56:10.414344  243743 oci.go:107] Successfully prepared a docker volume embed-certs-20220531175604-6903
	I0531 17:56:10.414377  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:10.414397  243743 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 17:56:10.414445  243743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220531175604-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 17:56:17.822157  243743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220531175604-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (7.407658192s)
	I0531 17:56:17.822188  243743 kic.go:188] duration metric: took 7.407788 seconds to extract preloaded images to volume
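	# Sketch: the preload tarball was just extracted into the named volume that
	# becomes the node container's /var, so containerd starts with every image
	# already in its store. To peek inside without booting the node (alpine is
	# an assumed throwaway image, not taken from this log):
	docker run --rm -v embed-certs-20220531175604-6903:/var alpine \
	  ls /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs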
	W0531 17:56:17.822300  243743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 17:56:17.822377  243743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 17:56:17.917371  243743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20220531175604-6903 --name embed-certs-20220531175604-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20220531175604-6903 --network embed-certs-20220531175604-6903 --ip 192.168.49.2 --volume embed-certs-20220531175604-6903:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 17:56:18.309484  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Running}}
	I0531 17:56:18.345151  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.377155  243743 cli_runner.go:164] Run: docker exec embed-certs-20220531175604-6903 stat /var/lib/dpkg/alternatives/iptables
	I0531 17:56:18.433758  243743 oci.go:247] the created container "embed-certs-20220531175604-6903" has a running status.
	I0531 17:56:18.433787  243743 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa...
	I0531 17:56:18.651045  243743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 17:56:18.737122  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.772057  243743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 17:56:18.772085  243743 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220531175604-6903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 17:56:18.848259  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:18.882465  243743 machine.go:88] provisioning docker machine ...
	I0531 17:56:18.882498  243743 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 17:56:18.882541  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:18.915976  243743 main.go:134] libmachine: Using SSH client type: native
	I0531 17:56:18.916173  243743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0531 17:56:18.916203  243743 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 17:56:19.035081  243743 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 17:56:19.035195  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.067189  243743 main.go:134] libmachine: Using SSH client type: native
	I0531 17:56:19.067362  243743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I0531 17:56:19.067394  243743 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:56:19.174458  243743 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 17:56:19.174496  243743 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 17:56:19.174513  243743 ubuntu.go:177] setting up certificates
	I0531 17:56:19.174522  243743 provision.go:83] configureAuth start
	I0531 17:56:19.174563  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.206506  243743 provision.go:138] copyHostCerts
	I0531 17:56:19.206555  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 17:56:19.206563  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 17:56:19.206631  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 17:56:19.206727  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 17:56:19.206748  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 17:56:19.206785  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 17:56:19.206926  243743 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 17:56:19.206955  243743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 17:56:19.206994  243743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 17:56:19.207074  243743 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
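	# Sketch: the server certificate minted above carries the SANs listed in the
	# san=[...] field (192.168.49.2, 127.0.0.1, localhost, minikube, the profile
	# name). They can be confirmed from the host with openssl (path as used by
	# the scp step below):
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem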
	I0531 17:56:19.354064  243743 provision.go:172] copyRemoteCerts
	I0531 17:56:19.354118  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:56:19.354167  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.385431  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.465829  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 17:56:19.482372  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 17:56:19.498469  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 17:56:19.514414  243743 provision.go:86] duration metric: configureAuth took 339.882889ms
	I0531 17:56:19.514440  243743 ubuntu.go:193] setting minikube options for container-runtime
	I0531 17:56:19.514580  243743 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:19.514593  243743 machine.go:91] provisioned docker machine in 632.108026ms
	I0531 17:56:19.514598  243743 client.go:171] LocalClient.Create took 14.394605814s
	I0531 17:56:19.514618  243743 start.go:173] duration metric: libmachine.API.Create for "embed-certs-20220531175604-6903" took 14.394651417s
	I0531 17:56:19.514628  243743 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 17:56:19.514633  243743 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:56:19.514668  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:56:19.514710  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.545303  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.630150  243743 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:56:19.632669  243743 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 17:56:19.632694  243743 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 17:56:19.632704  243743 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 17:56:19.632709  243743 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 17:56:19.632717  243743 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 17:56:19.632765  243743 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 17:56:19.632826  243743 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 17:56:19.632902  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 17:56:19.639100  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:56:19.655411  243743 start.go:309] post-start completed in 140.773803ms
	I0531 17:56:19.655734  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.685847  243743 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 17:56:19.686049  243743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:56:19.686083  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.714435  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.795018  243743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 17:56:19.798696  243743 start.go:134] duration metric: createHost completed in 14.681876121s
	I0531 17:56:19.798719  243743 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 14.682014704s
	I0531 17:56:19.798794  243743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 17:56:19.828548  243743 ssh_runner.go:195] Run: systemctl --version
	I0531 17:56:19.828596  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.828647  243743 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 17:56:19.828701  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:19.859164  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.861276  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:19.956313  243743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 17:56:19.965586  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 17:56:19.973861  243743 docker.go:187] disabling docker service ...
	I0531 17:56:19.973905  243743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 17:56:19.988442  243743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 17:56:19.996566  243743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 17:56:20.078652  243743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 17:56:20.157370  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 17:56:20.165848  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 17:56:20.177829  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICA
gIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0
gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
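	# Sketch: two node-side files were just written. /etc/crictl.yaml points
	# crictl at containerd's socket, which is why the later bare "sudo crictl"
	# calls need no --runtime-endpoint flag. The base64 payload above is the
	# generated /etc/containerd/config.toml; decoding it (saved here to an
	# assumed /tmp/containerd.b64) shows the CRI plugin's CNI paths:
	#   bin_dir  = "/opt/cni/bin"
	#   conf_dir = "/etc/cni/net.mk"
	# which matches the kubelet.cni-conf-dir extra-config set earlier.
	base64 -d /tmp/containerd.b64 | grep -n -A3 'cri".cni'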
	I0531 17:56:20.190831  243743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 17:56:20.196751  243743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 17:56:20.202752  243743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:56:20.278651  243743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 17:56:20.337554  243743 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 17:56:20.337615  243743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 17:56:20.341007  243743 start.go:468] Will wait 60s for crictl version
	I0531 17:56:20.341061  243743 ssh_runner.go:195] Run: sudo crictl version
	I0531 17:56:20.365853  243743 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 17:56:20.365913  243743 ssh_runner.go:195] Run: containerd --version
	I0531 17:56:20.392463  243743 ssh_runner.go:195] Run: containerd --version
	I0531 17:56:20.420160  243743 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 17:56:20.421445  243743 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 17:56:20.449939  243743 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 17:56:20.452980  243743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:56:20.464025  243743 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 17:56:20.465273  243743 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 17:56:20.465321  243743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:56:20.488641  243743 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:56:20.488661  243743 containerd.go:521] Images already preloaded, skipping extraction
	I0531 17:56:20.488697  243743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:56:20.509499  243743 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 17:56:20.509516  243743 cache_images.go:84] Images are preloaded, skipping loading
	I0531 17:56:20.509549  243743 ssh_runner.go:195] Run: sudo crictl info
	I0531 17:56:20.531616  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:20.531635  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:20.531651  243743 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 17:56:20.531662  243743 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 17:56:20.531784  243743 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
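	The generated config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by --- separators; as the following lines show, it is copied to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init. A file like this can be exercised without touching host state (a sketch; --dry-run renders the manifests into a temp directory instead of installing them):
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run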
	
	I0531 17:56:20.531857  243743 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 17:56:20.531896  243743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 17:56:20.538217  243743 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 17:56:20.538272  243743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 17:56:20.544669  243743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 17:56:20.559115  243743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 17:56:20.572041  243743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
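	The kubelet [Unit]/[Service] fragment logged above lands as the systemd drop-in 10-kubeadm.conf; the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from kubelet.service before the fully-flagged one replaces it. The merged unit can be confirmed on the node (a sketch, e.g. after `minikube ssh`):
	    systemctl cat kubelet            # kubelet.service plus the 10-kubeadm.conf drop-in
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet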
	I0531 17:56:20.584477  243743 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 17:56:20.587199  243743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:56:20.595757  243743 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 17:56:20.595849  243743 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 17:56:20.595883  243743 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 17:56:20.595923  243743 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 17:56:20.595935  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt with IP's: []
	I0531 17:56:20.865002  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt ...
	I0531 17:56:20.865034  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.crt: {Name:mk983f1351054e3a81162f051295cd0c506fcbd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:20.865199  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key ...
	I0531 17:56:20.865212  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key: {Name:mk405f0b57d526c28409acadcba4d956d1f0d13c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:20.865300  243743 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 17:56:20.865315  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 17:56:21.075006  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 ...
	I0531 17:56:21.075031  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2: {Name:mk45ad740db68b95692d916b33d8e02d8dba1ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.075247  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2 ...
	I0531 17:56:21.075264  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2: {Name:mkf42168f7ee31852f4a02d1ef506d7d5a8f7b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.075382  243743 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt
	I0531 17:56:21.075465  243743 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key
	I0531 17:56:21.075522  243743 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 17:56:21.075541  243743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt with IP's: []
	I0531 17:56:21.134487  243743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt ...
	I0531 17:56:21.134511  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt: {Name:mk1893a7aa78a1763283fdce57e297466ab59148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:21.134682  243743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key ...
	I0531 17:56:21.134698  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key: {Name:mk25a32f5f241a2369c40634bda3a1e4c75a34a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
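	The apiserver certificate generated above carries the SANs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]: the node IP, the first address of the 10.96.0.0/12 service CIDR (where the kubernetes Service answers), 127.0.0.1 for local clients, and 10.0.0.1. They can be inspected directly (path shortened to the profile directory for readability):
	    openssl x509 -noout -text -in $MINIKUBE_HOME/profiles/embed-certs-20220531175604-6903/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'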
	I0531 17:56:21.134919  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 17:56:21.134955  243743 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 17:56:21.134967  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 17:56:21.134989  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 17:56:21.135011  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 17:56:21.135032  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 17:56:21.135067  243743 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 17:56:21.135606  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 17:56:21.153391  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 17:56:21.169555  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 17:56:21.185845  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 17:56:21.201921  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 17:56:21.217951  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 17:56:21.234159  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 17:56:21.250138  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 17:56:21.266294  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 17:56:21.282188  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 17:56:21.298126  243743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 17:56:21.313822  243743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 17:56:21.325312  243743 ssh_runner.go:195] Run: openssl version
	I0531 17:56:21.329803  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 17:56:21.336618  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.339408  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.339449  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 17:56:21.343890  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 17:56:21.350793  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 17:56:21.357753  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.360608  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.360649  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:56:21.365192  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 17:56:21.372353  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 17:56:21.379136  243743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.382027  243743 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.382074  243743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 17:56:21.386461  243743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
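	The ls/openssl/ln sequence above installs each CA under /etc/ssl/certs twice: once by name and once under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem in this run), since the hash-named link is what OpenSSL's certificate lookup actually resolves. The hash is derived exactly as logged:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"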
	I0531 17:56:21.393159  243743 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:56:21.393235  243743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 17:56:21.393282  243743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 17:56:21.417667  243743 cri.go:87] found id: ""
	I0531 17:56:21.417716  243743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 17:56:21.424169  243743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 17:56:21.430469  243743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 17:56:21.430524  243743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 17:56:21.436945  243743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 17:56:21.436981  243743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 17:56:21.673625  243743 out.go:204]   - Generating certificates and keys ...
	I0531 17:56:24.424606  243743 out.go:204]   - Booting up control plane ...
	I0531 17:56:35.960123  243743 out.go:204]   - Configuring RBAC rules ...
	I0531 17:56:36.372484  243743 cni.go:95] Creating CNI manager for ""
	I0531 17:56:36.372507  243743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:56:36.374314  243743 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 17:56:36.375646  243743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 17:56:36.378972  243743 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 17:56:36.378987  243743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 17:56:36.391818  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
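	At this point the kindnet CNI manifest has been applied with the bundled kubectl against the in-node kubeconfig. Whether the DaemonSet actually came up can be checked the same way (the app=kindnet label selector is an assumption about the manifest, not shown in this log):
	    sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get pods -l app=kindnet -o wide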
	I0531 17:56:37.134192  243743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 17:56:37.134265  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.134293  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T17_56_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.216998  243743 ops.go:34] apiserver oom_adj: -16
	I0531 17:56:37.217013  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:37.772320  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:38.272340  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:38.772405  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:39.271965  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:39.772232  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:40.272312  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:40.772850  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:41.271928  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:41.772325  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:42.272975  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:42.772307  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:43.272387  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:43.772248  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:44.272332  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:44.771976  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:45.272400  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:45.772386  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:46.272935  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:46.772442  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:47.272553  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:47.771899  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:48.272016  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:48.772018  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.272518  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.772710  243743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:56:49.826843  243743 kubeadm.go:1045] duration metric: took 12.692623892s to wait for elevateKubeSystemPrivileges.
	I0531 17:56:49.826874  243743 kubeadm.go:397] StartCluster complete in 28.433719659s
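	The burst of `kubectl get sa default` calls above is a ~500 ms poll: the default ServiceAccount only exists once the controller manager's service-account controller is running, so its appearance is used as the readiness gate for elevateKubeSystemPrivileges. Reduced to a shell loop, the wait is simply (sketch):
	    until sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	          get sa default >/dev/null 2>&1; do sleep 0.5; done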
	I0531 17:56:49.826894  243743 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:49.826995  243743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:56:49.829203  243743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:50.344768  243743 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 17:56:50.344838  243743 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 17:56:50.346548  243743 out.go:177] * Verifying Kubernetes components...
	I0531 17:56:50.344908  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 17:56:50.345125  243743 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:56:50.345145  243743 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 17:56:50.347978  243743 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 17:56:50.348002  243743 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 17:56:50.348010  243743 addons.go:165] addon storage-provisioner should already be in state true
	I0531 17:56:50.348024  243743 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 17:56:50.348045  243743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 17:56:50.348060  243743 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 17:56:50.347983  243743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:56:50.348416  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.348631  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.389959  243743 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:56:50.391301  243743 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:56:50.391319  243743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 17:56:50.391356  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:50.404254  243743 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 17:56:50.404284  243743 addons.go:165] addon default-storageclass should already be in state true
	I0531 17:56:50.404312  243743 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 17:56:50.404824  243743 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 17:56:50.426361  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:50.436141  243743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 17:56:50.437624  243743 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 17:56:50.442307  243743 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 17:56:50.442324  243743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 17:56:50.442358  243743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 17:56:50.480148  243743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 17:56:50.522400  243743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:56:50.716771  243743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 17:56:50.801813  243743 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
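	The sed pipeline at 17:56:50.436141 rewrote the coredns ConfigMap in-stream, inserting a hosts stanza ahead of the `forward . /etc/resolv.conf` plugin so host.minikube.internal resolves from inside the cluster. Per that sed expression, the Corefile gains:
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }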
	I0531 17:56:50.954578  243743 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 17:56:50.955732  243743 addons.go:417] enableAddons completed in 610.588239ms
	I0531 17:56:52.444126  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:54.943796  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:57.443053  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:56:59.443687  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:01.943080  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:03.944340  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:06.443747  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:08.942997  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:10.943575  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:13.443323  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:15.443608  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:17.944142  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:20.443463  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:22.443654  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:24.444191  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:26.943262  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:28.943501  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:30.943907  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:32.944301  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:35.443484  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:37.943999  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:40.443245  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:42.444152  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:44.944382  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:47.443968  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:49.445784  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:51.944333  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:54.443331  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:56.443483  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:57:58.942972  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:00.944139  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:03.443519  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:05.943200  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:07.943995  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:10.443423  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:12.943328  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:14.943958  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:17.443050  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:19.443916  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:21.943559  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:23.944354  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:26.443517  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:28.942984  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:30.943939  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:32.944212  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:35.443909  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:37.944150  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:40.444242  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:42.943610  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:45.443056  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:47.443392  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:49.943222  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:51.944006  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:54.443561  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:56.443946  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:58:58.942966  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:00.943933  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:03.443196  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:05.443952  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:07.943843  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:09.944329  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:12.444402  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:14.943225  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:16.944023  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:19.443638  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:21.943242  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:23.944004  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:25.944401  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:28.444502  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:30.943797  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:32.944129  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:35.443699  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:37.943489  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:39.943619  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:41.943950  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:44.443800  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:46.944150  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:49.443556  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:51.443883  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:53.444259  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:55.943802  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 17:59:58.443685  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:00.943017  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:02.943945  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:05.443693  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:07.943369  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:09.943982  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:12.443283  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:14.443918  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:16.444074  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:18.942796  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:20.944048  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:23.443965  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:25.943916  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:28.443449  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:30.443775  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.444063  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:34.943765  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.944411  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:39.443721  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:41.943597  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:43.944098  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:46.442937  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.443904  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.444077  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.446109  243743 node_ready.go:38] duration metric: took 4m0.008452547s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
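	Each node_ready poll above reads the node's Ready condition, which never left False across the four minutes of polling; a node stuck NotReady like this usually means the CNI (kindnet here) never initialized networking on the node. The same check by hand (sketch):
	    sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get node embed-certs-20220531175604-6903 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False throughout this run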
	I0531 18:00:50.448431  243743 out.go:177] 
	W0531 18:00:50.449997  243743 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:00:50.450021  243743 out.go:239] * 
	W0531 18:00:50.450791  243743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:00:50.452520  243743 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p embed-certs-20220531175604-6903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531175604-6903
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531175604-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f",
	        "Created": "2022-05-31T17:56:17.948185818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244636,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:56:18.300730024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hosts",
	        "LogPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f-json.log",
	        "Name": "/embed-certs-20220531175604-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531175604-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531175604-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531175604-6903",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531175604-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531175604-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11a6b63b5abe8f9c9428988cf4db6f03035277ca15e61a9acec7f8823d618698",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/11a6b63b5abe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531175604-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ac8a0a6250b5",
	                        "embed-certs-20220531175604-6903"
	                    ],
	                    "NetworkID": "810e286ea2469d855f00ec56445da0705b1ca1a44b439a6e099264f06730a27d",
	                    "EndpointID": "d7bf905c93b04663aeaeb7c5b125cdceaf3e7b5b400379603ca717422c8036ad",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
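
For reference, the "Ports" map in the inspect output above shows each guest port (22, 2376, 5000, 8443, 32443) published on 127.0.0.1. The host port bound to the guest SSH port can be read back with the same Go template that minikube's own provisioning uses later in this report; the container name is assumed to match the profile name, which is how minikube names its KIC containers:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      embed-certs-20220531175604-6903
    # prints 49417 for the state captured above
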
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220531175604-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:47 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| start   | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:44 UTC | 31 May 22 17:49 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |                |                     |                     |
	|         | --enable-default-cni=true                                  |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	|         | pgrep -a kubelet                                           |                                                |         |                |                     |                     |
	| start   | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:49 UTC |
	|         | --memory=2048                                              |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                          |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                               |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	| ssh     | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | pgrep -a kubelet                                           |                                                |         |                |                     |                     |
	| logs    | calico-20220531174030-6903                                 | calico-20220531174030-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220531174030-6903                              | calico-20220531174030-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20220531175323-6903      | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | disable-driver-mounts-20220531175323-6903                  |                                                |         |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:54 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |         |                |                     |                     |
	|         | --keep-context=false                                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903                     | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                                 | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:00:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:00:31.855034  253603 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:00:31.855128  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855137  253603 out.go:309] Setting ErrFile to fd 2...
	I0531 18:00:31.855169  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855275  253603 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:00:31.855500  253603 out.go:303] Setting JSON to false
	I0531 18:00:31.857002  253603 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6183,"bootTime":1654013849,"procs":755,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:00:31.857065  253603 start.go:125] virtualization: kvm guest
	I0531 18:00:31.859650  253603 out.go:177] * [newest-cni-20220531175602-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:00:31.861106  253603 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:00:31.861145  253603 notify.go:193] Checking for updates...
	I0531 18:00:31.863620  253603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:00:31.865010  253603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:31.866391  253603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:00:31.867875  253603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:00:31.871501  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:31.872091  253603 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:00:31.913476  253603 docker.go:137] docker version: linux-20.10.16
	I0531 18:00:31.913607  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.012796  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:31.941581138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.012892  253603 docker.go:254] overlay module found
	I0531 18:00:32.015694  253603 out.go:177] * Using the docker driver based on existing profile
	I0531 18:00:32.016948  253603 start.go:284] selected driver: docker
	I0531 18:00:32.016961  253603 start.go:806] validating driver "docker" against &{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.017071  253603 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:00:32.017980  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.118816  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:32.047560918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.119131  253603 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0531 18:00:32.119167  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:32.119175  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:32.119195  253603 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119208  253603 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119215  253603 start_flags.go:306] config:
	{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.122424  253603 out.go:177] * Starting control plane node newest-cni-20220531175602-6903 in cluster newest-cni-20220531175602-6903
	I0531 18:00:32.123755  253603 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:00:32.125291  253603 out.go:177] * Pulling base image ...
	I0531 18:00:32.126765  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:32.126808  253603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:00:32.126822  253603 cache.go:57] Caching tarball of preloaded images
	I0531 18:00:32.126856  253603 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:00:32.127020  253603 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:00:32.127034  253603 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:00:32.127170  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.176155  253603 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:00:32.176180  253603 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:00:32.176199  253603 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:00:32.176233  253603 start.go:352] acquiring machines lock for newest-cni-20220531175602-6903: {Name:mk17d90b6d3b0fde22fd963b3786e868dc154060 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:00:32.176322  253603 start.go:356] acquired machines lock for "newest-cni-20220531175602-6903" in 69.182µs
	I0531 18:00:32.176340  253603 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:00:32.176344  253603 fix.go:55] fixHost starting: 
	I0531 18:00:32.176560  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.209761  253603 fix.go:103] recreateIfNeeded on newest-cni-20220531175602-6903: state=Stopped err=<nil>
	W0531 18:00:32.209791  253603 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:00:32.212875  253603 out.go:177] * Restarting existing docker container for "newest-cni-20220531175602-6903" ...
	I0531 18:00:30.443775  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.444063  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.214225  253603 cli_runner.go:164] Run: docker start newest-cni-20220531175602-6903
	I0531 18:00:32.577327  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.610657  253603 kic.go:416] container "newest-cni-20220531175602-6903" state is running.
	I0531 18:00:32.611011  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:32.643675  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.643905  253603 machine.go:88] provisioning docker machine ...
	I0531 18:00:32.643932  253603 ubuntu.go:169] provisioning hostname "newest-cni-20220531175602-6903"
	I0531 18:00:32.643983  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:32.674555  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:32.674809  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:32.674837  253603 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531175602-6903 && echo "newest-cni-20220531175602-6903" | sudo tee /etc/hostname
	I0531 18:00:32.675642  253603 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46432->127.0.0.1:49427: read: connection reset by peer
	I0531 18:00:35.795562  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531175602-6903
	
	I0531 18:00:35.795625  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:35.826982  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:35.827166  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:35.827189  253603 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531175602-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531175602-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531175602-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:00:35.938582  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:00:35.938614  253603 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:00:35.938689  253603 ubuntu.go:177] setting up certificates
	I0531 18:00:35.938700  253603 provision.go:83] configureAuth start
	I0531 18:00:35.938739  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:35.970778  253603 provision.go:138] copyHostCerts
	I0531 18:00:35.970836  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:00:35.970855  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:00:35.970915  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:00:35.971070  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:00:35.971088  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:00:35.971129  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:00:35.971236  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:00:35.971254  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:00:35.971287  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:00:35.971355  253603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531175602-6903 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531175602-6903]
	I0531 18:00:36.142238  253603 provision.go:172] copyRemoteCerts
	I0531 18:00:36.142291  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:00:36.142320  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.173472  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.254066  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:00:36.271055  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:00:36.287105  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:00:36.302927  253603 provision.go:86] duration metric: configureAuth took 364.217481ms
	I0531 18:00:36.302948  253603 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:00:36.303122  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:36.303134  253603 machine.go:91] provisioned docker machine in 3.659215237s
	I0531 18:00:36.303168  253603 start.go:306] post-start starting for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 18:00:36.303175  253603 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:00:36.303216  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:00:36.303261  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.335634  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.418002  253603 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:00:36.420669  253603 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:00:36.420693  253603 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:00:36.420701  253603 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:00:36.420706  253603 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:00:36.420719  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:00:36.420765  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:00:36.420825  253603 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:00:36.420897  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:00:36.427208  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:36.443819  253603 start.go:309] post-start completed in 140.639246ms
	I0531 18:00:36.443888  253603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:00:36.443930  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.477971  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.555314  253603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:00:36.559129  253603 fix.go:57] fixHost completed within 4.38277864s
	I0531 18:00:36.559171  253603 start.go:81] releasing machines lock for "newest-cni-20220531175602-6903", held for 4.382836668s
	I0531 18:00:36.559246  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:36.590986  253603 ssh_runner.go:195] Run: systemctl --version
	I0531 18:00:36.591023  253603 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:00:36.591084  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.591027  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.624550  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.625023  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.722476  253603 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:00:36.732794  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:00:36.741236  253603 docker.go:187] disabling docker service ...
	I0531 18:00:36.741281  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:00:36.757377  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:00:36.765762  253603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:00:36.850081  253603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:00:34.943765  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.944411  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:39.443721  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.930380  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:00:36.938984  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:00:36.951805  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0531 18:00:36.964223  253603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:00:36.970217  253603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:00:36.976123  253603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:00:37.050759  253603 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:00:37.133255  253603 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:00:37.133326  253603 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:00:37.136650  253603 start.go:468] Will wait 60s for crictl version
	I0531 18:00:37.136705  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:37.162540  253603 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:00:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:00:41.943597  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:43.944098  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.209660  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:48.232631  253603 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:00:48.232687  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.260476  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.288516  253603 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:00:48.289983  253603 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:00:48.321110  253603 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 18:00:48.324362  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:00:48.335260  253603 out.go:177]   - kubelet.network-plugin=cni
	I0531 18:00:48.336944  253603 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 18:00:48.338457  253603 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:00:46.442937  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.443904  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.444077  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.446109  243743 node_ready.go:38] duration metric: took 4m0.008452547s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:00:50.448431  243743 out.go:177] 
	W0531 18:00:50.449997  243743 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:00:50.450021  243743 out.go:239] * 
	W0531 18:00:50.450791  243743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:00:50.452520  243743 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	cd37d7e6c7fad       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   ff9f2b9e4b710
	2288455dcf90c       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   ff9f2b9e4b710
	2dd4c6e62c848       4c03754524064       4 minutes ago        Running             kube-proxy                0                   9882491c2eb7b
	bce895f043845       8fa62c12256df       4 minutes ago        Running             kube-apiserver            0                   6b4c83a2bc23d
	93653e4eba8ad       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   b7f9cd5df9ca2
	8878d3b54661f       595f327f224a4       4 minutes ago        Running             kube-scheduler            0                   b8e30e5a630dd
	55beac89e1876       df7b72818ad2e       4 minutes ago        Running             kube-controller-manager   0                   56c483605d4d5
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:56:18 UTC, end at Tue 2022-05-31 18:00:51 UTC. --
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.024469726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.024489044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.024555152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.024564450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.024672854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7 pid=1724 runtime=io.containerd.runc.v2
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.024768014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9882491c2eb7b0f5b9adcf425d3c1df97b78990664ea08bc91faa5b9ac4aea4d pid=1725 runtime=io.containerd.runc.v2
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.114818429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nvktf,Uid:a3c917bd-93a0-40b6-85c5-7ea637a1aaac,Namespace:kube-system,Attempt:0,} returns sandbox id \"9882491c2eb7b0f5b9adcf425d3c1df97b78990664ea08bc91faa5b9ac4aea4d\""
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.117534119Z" level=info msg="CreateContainer within sandbox \"9882491c2eb7b0f5b9adcf425d3c1df97b78990664ea08bc91faa5b9ac4aea4d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.132774721Z" level=info msg="CreateContainer within sandbox \"9882491c2eb7b0f5b9adcf425d3c1df97b78990664ea08bc91faa5b9ac4aea4d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843\""
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.133266994Z" level=info msg="StartContainer for \"2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843\""
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.206387458Z" level=info msg="StartContainer for \"2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843\" returns successfully"
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.317249906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-jrlsl,Uid:c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\""
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.319979514Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.332321003Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6\""
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.332711703Z" level=info msg="StartContainer for \"2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6\""
	May 31 17:56:50 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:56:50.607849149Z" level=info msg="StartContainer for \"2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6\" returns successfully"
	May 31 17:59:31 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:31.773027782Z" level=error msg="collecting metrics for 2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6" error="cgroups: cgroup deleted: unknown"
	May 31 17:59:33 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:33.495630137Z" level=info msg="shim disconnected" id=2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6
	May 31 17:59:33 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:33.495682907Z" level=warning msg="cleaning up after shim disconnected" id=2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6 namespace=k8s.io
	May 31 17:59:33 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:33.495696016Z" level=info msg="cleaning up dead shim"
	May 31 17:59:33 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:33.505955099Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:59:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2084 runtime=io.containerd.runc.v2\n"
	May 31 17:59:33 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:33.622242627Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	May 31 17:59:33 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:33.635454375Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"cd37d7e6c7fad1ef674cc2879250f6da83924608590f4597ae0ed06d537d8e48\""
	May 31 17:59:33 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:33.635902869Z" level=info msg="StartContainer for \"cd37d7e6c7fad1ef674cc2879250f6da83924608590f4597ae0ed06d537d8e48\""
	May 31 17:59:33 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T17:59:33.723282370Z" level=info msg="StartContainer for \"cd37d7e6c7fad1ef674cc2879250f6da83924608590f4597ae0ed06d537d8e48\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531175604-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531175604-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531175604-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_56_37_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:56:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531175604-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:00:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 17:56:48 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 17:56:48 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 17:56:48 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 17:56:48 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20220531175604-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                9377e8f5-ae2b-465c-b601-bd790903b8eb
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220531175604-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	  kube-system                 kindnet-jrlsl                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-embed-certs-20220531175604-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-embed-certs-20220531175604-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-nvktf                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-embed-certs-20220531175604-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m1s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  4m21s (x5 over 4m21s)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x3 over 4m21s)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x3 over 4m21s)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m10s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s                  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s                  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s                  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b] <==
	* {"level":"info","ts":"2022-05-31T17:56:30.928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-05-31T17:56:30.929Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:embed-certs-20220531175604-6903 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.824Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:56:31.824Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  18:00:51 up  1:43,  0 users,  load average: 0.81, 0.98, 1.52
	Linux embed-certs-20220531175604-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808] <==
	* I0531 17:56:33.502029       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:56:33.502098       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:56:33.502115       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:56:33.502122       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:56:33.502139       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:56:33.509858       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 17:56:34.382390       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:56:34.382418       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:56:34.386588       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:56:34.389618       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:56:34.389640       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:56:34.731605       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:56:34.758176       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:56:34.832687       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:56:34.837596       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0531 17:56:34.838344       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:56:34.841228       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:56:35.526888       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:56:36.183478       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:56:36.189560       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:56:36.198454       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:56:41.306447       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:56:49.027629       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:56:49.828447       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:56:50.264822       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad] <==
	* I0531 17:56:48.924684       1 shared_informer.go:247] Caches are synced for endpoint 
	I0531 17:56:48.924715       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:56:48.924965       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 17:56:48.926974       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 17:56:48.931979       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 17:56:48.976098       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:56:49.021334       1 shared_informer.go:247] Caches are synced for attach detach 
	I0531 17:56:49.033089       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jrlsl"
	I0531 17:56:49.034599       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nvktf"
	I0531 17:56:49.074109       1 shared_informer.go:247] Caches are synced for taint 
	I0531 17:56:49.074203       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 17:56:49.074226       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0531 17:56:49.074316       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220531175604-6903. Assuming now as a timestamp.
	I0531 17:56:49.074352       1 event.go:294] "Event occurred" object="embed-certs-20220531175604-6903" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220531175604-6903 event: Registered Node embed-certs-20220531175604-6903 in Controller"
	I0531 17:56:49.074384       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0531 17:56:49.133279       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:56:49.136437       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:56:49.552201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:56:49.591219       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:56:49.591238       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:56:49.830463       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:56:49.852290       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:56:49.928800       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-z8m4h"
	I0531 17:56:49.932514       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-w2s2k"
	I0531 17:56:50.002323       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-z8m4h"
	
	* 
	* ==> kube-proxy [2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843] <==
	* I0531 17:56:50.241573       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 17:56:50.241684       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 17:56:50.241784       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:56:50.262124       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:56:50.262154       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:56:50.262162       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:56:50.262174       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:56:50.262538       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:56:50.263031       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:56:50.263090       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:56:50.263054       1 config.go:317] "Starting service config controller"
	I0531 17:56:50.263166       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:56:50.363889       1 shared_informer.go:247] Caches are synced for service config 
	I0531 17:56:50.363890       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e] <==
	* E0531 17:56:33.431718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:56:33.431722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:56:33.431754       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:56:33.431760       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:56:33.431778       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:33.431785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:33.502370       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:56:33.503086       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:56:33.503381       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:33.503415       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:33.503867       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:56:33.503957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 17:56:34.288126       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:56:34.288171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:56:34.331364       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:56:34.331388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:56:34.365507       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:56:34.365532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:56:34.370569       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:34.370601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:34.434332       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:56:34.434358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 17:56:34.741075       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:56:34.741113       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 17:56:36.429128       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:56:18 UTC, end at Tue 2022-05-31 18:00:51 UTC. --
	May 31 17:58:56 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:58:56.504401    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:01 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:01.505333    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:06 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:06.506679    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:11 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:11.508040    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:16 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:16.509438    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:21 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:21.510079    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:26 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:26.511575    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:31 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:31.513005    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:33 embed-certs-20220531175604-6903 kubelet[1306]: I0531 17:59:33.619888    1306 scope.go:110] "RemoveContainer" containerID="2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6"
	May 31 17:59:36 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:36.514305    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:41 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:41.515684    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:46 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:46.517262    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:51 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:51.518813    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 17:59:56 embed-certs-20220531175604-6903 kubelet[1306]: E0531 17:59:56.520185    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:01 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:01.521244    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:06 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:06.522373    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:11 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:11.523489    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:16 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:16.524260    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:21 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:21.525699    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:26 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:26.526966    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:31 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:31.528068    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:36 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:36.529192    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:41 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:41.530338    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:46 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:46.531738    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:00:51 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:00:51.532601    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-w2s2k storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-w2s2k storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-w2s2k storage-provisioner: exit status 1 (56.133226ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-w2s2k" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-w2s2k storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (287.57s)
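The failure above reduces to the node never reaching Ready because the CNI was not initialized (kubelet: "cni plugin not initialized"; the kindnet-cni container exited and was restarted once, per the container status above). A minimal shell sketch for confirming that diagnosis against a live profile, assuming the embed-certs profile from this run is still present and that the kindnet pods carry the app=kindnet label (an assumption, not taken from this log):

	# Check the node's Ready condition and its reason (profile/context names taken from this run).
	kubectl --context embed-certs-20220531175604-6903 describe node embed-certs-20220531175604-6903 | grep -A 2 'Ready'
	# Inspect the CNI conf dir the kubelet was configured with in this job (cni-conf-dir=/etc/cni/net.mk).
	minikube -p embed-certs-20220531175604-6903 ssh -- sudo ls -l /etc/cni/net.mk
	# Tail the kindnet logs to see why the CNI config was never written (label assumed, see above).
	kubectl --context embed-certs-20220531175604-6903 -n kube-system logs -l app=kindnet --tail=20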

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (484.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d43490d8-54e7-4006-96b6-59ac3fb1f770] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0531 17:58:37.671035    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:58:57.610815    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:59:08.049238    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:59:11.143002    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:59:25.073298    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:59:48.440056    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: ***** TestStartStop/group/no-preload/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:198: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
start_stop_delete_test.go:198: TestStartStop/group/no-preload/serial/DeployApp: showing logs for failed pods as of 2022-05-31 18:06:16.088002888 +0000 UTC m=+3227.613149144
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 describe po busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context no-preload-20220531175323-6903 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dw8dv (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-dw8dv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  45s (x8 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 logs busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context no-preload-20220531175323-6903 logs busybox -n default:
start_stop_delete_test.go:198: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
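Reading the failure: every FailedScheduling event above cites the node.kubernetes.io/not-ready taint, i.e. the single node's kubelet never reported Ready within the 8m0s window, so the scheduler had nowhere to place busybox. A hedged sketch for confirming the node state against the same context (illustrative commands, not part of the recorded run):

	# Node readiness; a Ready=False/Unknown status here matches the scheduler message.
	kubectl --context no-preload-20220531175323-6903 get nodes -o wide
	# The taints the scheduler complained about.
	kubectl --context no-preload-20220531175323-6903 describe node no-preload-20220531175323-6903 | grep -i -A2 taints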
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220531175323-6903
helpers_test.go:235: (dbg) docker inspect no-preload-20220531175323-6903:

-- stdout --
	[
	    {
	        "Id": "a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d",
	        "Created": "2022-05-31T17:53:25.199469079Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230732,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:53:25.538304199Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hosts",
	        "LogPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d-json.log",
	        "Name": "/no-preload-20220531175323-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220531175323-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220531175323-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220531175323-6903",
	                "Source": "/var/lib/docker/volumes/no-preload-20220531175323-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220531175323-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "name.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6413ea608901d520cb420be1567e8fbd6f13d85f29fc8ae60c4095bc5f68676",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6413ea60890",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220531175323-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a4f33d13fefc",
	                        "no-preload-20220531175323-6903"
	                    ],
	                    "NetworkID": "b2391a84ebd8e16dd2e9aca80777d6d03045cffc9cfc8290f45a61a1473c3244",
	                    "EndpointID": "81cd7594f26487ced42b2407b71e68ba6220c3d831ffa8d20b6ab5ac89aa38f6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
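The inspect dump shows the kic container itself is healthy ("Status": "running", RestartCount 0, no OOM kill), which localizes the DeployApp failure inside the guest (kubelet/CNI) rather than at the Docker layer. The same check collapses to one line with a Go template, in the style of the --format calls minikube issues later in this log (illustrative command, not part of the recorded run):

	docker container inspect no-preload-20220531175323-6903 --format '{{.State.Status}} restarts={{.RestartCount}} oom={{.State.OOMKilled}}'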
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220531175323-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p calico-20220531174030-6903                              | calico-20220531174030-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20220531175323-6903      | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | disable-driver-mounts-20220531175323-6903                  |                                                |         |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:54 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |         |                |                     |                     |
	|         | --keep-context=false                                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903                     | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                                 | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:00:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:00:31.855034  253603 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:00:31.855128  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855137  253603 out.go:309] Setting ErrFile to fd 2...
	I0531 18:00:31.855169  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855275  253603 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:00:31.855500  253603 out.go:303] Setting JSON to false
	I0531 18:00:31.857002  253603 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6183,"bootTime":1654013849,"procs":755,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:00:31.857065  253603 start.go:125] virtualization: kvm guest
	I0531 18:00:31.859650  253603 out.go:177] * [newest-cni-20220531175602-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:00:31.861106  253603 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:00:31.861145  253603 notify.go:193] Checking for updates...
	I0531 18:00:31.863620  253603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:00:31.865010  253603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:31.866391  253603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:00:31.867875  253603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:00:31.871501  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:31.872091  253603 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:00:31.913476  253603 docker.go:137] docker version: linux-20.10.16
	I0531 18:00:31.913607  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.012796  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:31.941581138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.012892  253603 docker.go:254] overlay module found
	I0531 18:00:32.015694  253603 out.go:177] * Using the docker driver based on existing profile
	I0531 18:00:32.016948  253603 start.go:284] selected driver: docker
	I0531 18:00:32.016961  253603 start.go:806] validating driver "docker" against &{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAd
donRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.017071  253603 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:00:32.017980  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.118816  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:32.047560918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.119131  253603 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0531 18:00:32.119167  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:32.119175  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:32.119195  253603 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119208  253603 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119215  253603 start_flags.go:306] config:
	{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true
apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.122424  253603 out.go:177] * Starting control plane node newest-cni-20220531175602-6903 in cluster newest-cni-20220531175602-6903
	I0531 18:00:32.123755  253603 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:00:32.125291  253603 out.go:177] * Pulling base image ...
	I0531 18:00:32.126765  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:32.126808  253603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:00:32.126822  253603 cache.go:57] Caching tarball of preloaded images
	I0531 18:00:32.126856  253603 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:00:32.127020  253603 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:00:32.127034  253603 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:00:32.127170  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.176155  253603 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:00:32.176180  253603 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:00:32.176199  253603 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:00:32.176233  253603 start.go:352] acquiring machines lock for newest-cni-20220531175602-6903: {Name:mk17d90b6d3b0fde22fd963b3786e868dc154060 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:00:32.176322  253603 start.go:356] acquired machines lock for "newest-cni-20220531175602-6903" in 69.182µs
	I0531 18:00:32.176340  253603 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:00:32.176344  253603 fix.go:55] fixHost starting: 
	I0531 18:00:32.176560  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.209761  253603 fix.go:103] recreateIfNeeded on newest-cni-20220531175602-6903: state=Stopped err=<nil>
	W0531 18:00:32.209791  253603 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:00:32.212875  253603 out.go:177] * Restarting existing docker container for "newest-cni-20220531175602-6903" ...
	I0531 18:00:30.443775  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.444063  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.214225  253603 cli_runner.go:164] Run: docker start newest-cni-20220531175602-6903
	I0531 18:00:32.577327  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.610657  253603 kic.go:416] container "newest-cni-20220531175602-6903" state is running.
	I0531 18:00:32.611011  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:32.643675  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.643905  253603 machine.go:88] provisioning docker machine ...
	I0531 18:00:32.643932  253603 ubuntu.go:169] provisioning hostname "newest-cni-20220531175602-6903"
	I0531 18:00:32.643983  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:32.674555  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:32.674809  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:32.674837  253603 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531175602-6903 && echo "newest-cni-20220531175602-6903" | sudo tee /etc/hostname
	I0531 18:00:32.675642  253603 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46432->127.0.0.1:49427: read: connection reset by peer
	I0531 18:00:35.795562  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531175602-6903
	
	I0531 18:00:35.795625  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:35.826982  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:35.827166  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:35.827189  253603 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531175602-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531175602-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531175602-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:00:35.938582  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:00:35.938614  253603 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:00:35.938689  253603 ubuntu.go:177] setting up certificates
	I0531 18:00:35.938700  253603 provision.go:83] configureAuth start
	I0531 18:00:35.938739  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:35.970778  253603 provision.go:138] copyHostCerts
	I0531 18:00:35.970836  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:00:35.970855  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:00:35.970915  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:00:35.971070  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:00:35.971088  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:00:35.971129  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:00:35.971236  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:00:35.971254  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:00:35.971287  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:00:35.971355  253603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531175602-6903 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531175602-6903]
	I0531 18:00:36.142238  253603 provision.go:172] copyRemoteCerts
	I0531 18:00:36.142291  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:00:36.142320  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.173472  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.254066  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:00:36.271055  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:00:36.287105  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:00:36.302927  253603 provision.go:86] duration metric: configureAuth took 364.217481ms
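
Note: configureAuth above regenerates the docker-machine server certificate with the SAN list logged at provision.go:112 and copies it to /etc/docker on the node. If a TLS handshake to the node ever fails, the SANs actually baked into server.pem can be checked directly; a minimal sketch, assuming openssl on the CI host, with MINIKUBE_HOME as a hypothetical shorthand (not a variable minikube sets) for the long .minikube path in the log:

    # Hypothetical shorthand for the long .minikube path shown above
    MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
    # Show the Subject Alternative Names baked into the regenerated server cert
    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" | grep -A1 'Subject Alternative Name'
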
	I0531 18:00:36.302948  253603 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:00:36.303122  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:36.303134  253603 machine.go:91] provisioned docker machine in 3.659215237s
	I0531 18:00:36.303168  253603 start.go:306] post-start starting for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 18:00:36.303175  253603 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:00:36.303216  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:00:36.303261  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.335634  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.418002  253603 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:00:36.420669  253603 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:00:36.420693  253603 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:00:36.420701  253603 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:00:36.420706  253603 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:00:36.420719  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:00:36.420765  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:00:36.420825  253603 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:00:36.420897  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:00:36.427208  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:36.443819  253603 start.go:309] post-start completed in 140.639246ms
	I0531 18:00:36.443888  253603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:00:36.443930  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.477971  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.555314  253603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:00:36.559129  253603 fix.go:57] fixHost completed within 4.38277864s
	I0531 18:00:36.559171  253603 start.go:81] releasing machines lock for "newest-cni-20220531175602-6903", held for 4.382836668s
	I0531 18:00:36.559246  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:36.590986  253603 ssh_runner.go:195] Run: systemctl --version
	I0531 18:00:36.591023  253603 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:00:36.591084  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.591027  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.624550  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.625023  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.722476  253603 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:00:36.732794  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:00:36.741236  253603 docker.go:187] disabling docker service ...
	I0531 18:00:36.741281  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:00:36.757377  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:00:36.765762  253603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:00:36.850081  253603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:00:34.943765  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.944411  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:39.443721  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.930380  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:00:36.938984  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:00:36.951805  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
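
Note: the containerd configuration is shipped as a single base64 blob so it survives shell quoting, then decoded on the node into /etc/containerd/config.toml. Decoding the blob shows, among other settings, sandbox_image = "k8s.gcr.io/pause:3.6", the non-standard CNI conf_dir = "/etc/cni/net.mk" (matching the kubelet's cni-conf-dir flag later in the log), and SystemdCgroup = false. A sketch for reading the result back, assuming only coreutils base64:

    # Read back the config that the command above wrote
    sudo cat /etc/containerd/config.toml
    # Or decode the blob locally without touching the node
    echo '<base64 blob from the log>' | base64 -d | head -n 20
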
	I0531 18:00:36.964223  253603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:00:36.970217  253603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:00:36.976123  253603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:00:37.050759  253603 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:00:37.133255  253603 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:00:37.133326  253603 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:00:37.136650  253603 start.go:468] Will wait 60s for crictl version
	I0531 18:00:37.136705  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:37.162540  253603 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:00:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:00:41.943597  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:43.944098  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.209660  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:48.232631  253603 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:00:48.232687  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.260476  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.288516  253603 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:00:48.289983  253603 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:00:48.321110  253603 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 18:00:48.324362  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:00:48.335260  253603 out.go:177]   - kubelet.network-plugin=cni
	I0531 18:00:48.336944  253603 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 18:00:48.338457  253603 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:00:46.442937  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.443904  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.444077  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.446109  243743 node_ready.go:38] duration metric: took 4m0.008452547s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:00:50.448431  243743 out.go:177] 
	W0531 18:00:50.449997  243743 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:00:50.450021  243743 out.go:239] * 
	W0531 18:00:50.450791  243743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:00:50.452520  243743 out.go:177] 
	I0531 18:00:48.339824  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:48.339884  253603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:00:48.363681  253603 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:00:48.363700  253603 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:00:48.363745  253603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:00:48.385839  253603 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:00:48.385856  253603 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:00:48.385893  253603 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:00:48.408057  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:48.408077  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:48.408091  253603 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0531 18:00:48.408103  253603 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220531175602-6903 NodeName:newest-cni-20220531175602-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:00:48.408230  253603 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220531175602-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
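
Note: the YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A rendered config like this can be exercised without mutating the node; a sketch, assuming a kubeadm v1.23.x binary on the PATH (on the node it lives under /var/lib/minikube/binaries/v1.23.6):

    # Validate the rendered config end-to-end without persisting any state
    sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml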
	
	I0531 18:00:48.408307  253603 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220531175602-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
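
Note: the unit text above is installed as the systemd drop-in 10-kubeadm.conf (scp'd in the next lines); the empty ExecStart= line deliberately clears any distro default before setting minikube's own kubelet command line. Whether systemd actually merged the drop-in can be checked after a reload; a sketch assuming standard systemd tooling:

    sudo systemctl daemon-reload
    # Prints kubelet.service followed by the 10-kubeadm.conf drop-in
    systemctl cat kubelet
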
	I0531 18:00:48.408350  253603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:00:48.414874  253603 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:00:48.414928  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:00:48.421138  253603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes)
	I0531 18:00:48.433792  253603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:00:48.447663  253603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2195 bytes)
	I0531 18:00:48.459853  253603 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:00:48.462496  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:00:48.470850  253603 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903 for IP: 192.168.58.2
	I0531 18:00:48.470935  253603 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:00:48.470970  253603 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:00:48.471030  253603 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.key
	I0531 18:00:48.471080  253603 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key.cee25041
	I0531 18:00:48.471114  253603 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key
	I0531 18:00:48.471247  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:00:48.471280  253603 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:00:48.471292  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:00:48.471322  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:00:48.471348  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:00:48.471369  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:00:48.471406  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:48.471990  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:00:48.487996  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:00:48.504050  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:00:48.520129  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:00:48.536197  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:00:48.551773  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:00:48.567698  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:00:48.583534  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:00:48.599284  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:00:48.615488  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:00:48.631736  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:00:48.648044  253603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:00:48.659819  253603 ssh_runner.go:195] Run: openssl version
	I0531 18:00:48.664514  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:00:48.671684  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.674554  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.674592  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.678953  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:00:48.685183  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:00:48.691850  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.694734  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.694775  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.699108  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:00:48.705843  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:00:48.713797  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.716588  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.716628  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.720988  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
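
Note: the test -L / ln -fs pairs above implement OpenSSL's hashed-directory lookup: every CA in /etc/ssl/certs must also be reachable under its subject hash plus ".0" (51391683, 3ec20f2e and b5213941 in this run). The pattern generalizes directly from the commands in the log:

    # Link a CA under its subject-hash name so OpenSSL can find it
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
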
	I0531 18:00:48.727223  253603 kubeadm.go:395] StartCluster: {Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:48.727350  253603 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:00:48.727391  253603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:00:48.751975  253603 cri.go:87] found id: "776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af"
	I0531 18:00:48.751998  253603 cri.go:87] found id: "e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0"
	I0531 18:00:48.752009  253603 cri.go:87] found id: "8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620"
	I0531 18:00:48.752025  253603 cri.go:87] found id: "2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914"
	I0531 18:00:48.752038  253603 cri.go:87] found id: "f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9"
	I0531 18:00:48.752051  253603 cri.go:87] found id: "6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b"
	I0531 18:00:48.752060  253603 cri.go:87] found id: ""
	I0531 18:00:48.752094  253603 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:00:48.763086  253603 cri.go:114] JSON = null
	W0531 18:00:48.763128  253603 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
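
Note: the warning means the two views disagree: crictl sees six kube-system containers, while runc's state directory for the k8s.io namespace returns null, so minikube cannot tell which containers (if any) are paused and skips the unpause step. The two commands from the log can be replayed verbatim on the node to compare:

    sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    sudo runc --root /run/containerd/runc/k8s.io list -f json
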
	I0531 18:00:48.763217  253603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:00:48.769482  253603 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:00:48.769502  253603 kubeadm.go:626] restartCluster start
	I0531 18:00:48.769537  253603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:00:48.775590  253603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:48.776475  253603 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220531175602-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:48.777108  253603 kubeconfig.go:127] "newest-cni-20220531175602-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:00:48.777968  253603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:00:48.779498  253603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:00:48.785488  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:48.785519  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:48.793052  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:48.993429  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:48.993482  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.001612  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.193914  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.193974  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.202307  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.393581  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.393647  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.401876  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.594165  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.594228  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.602448  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.793873  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.793934  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.802272  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.993549  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.993606  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.002105  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.193422  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.193478  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.201805  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.394099  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.394197  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.402406  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.593662  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.593737  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.602754  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.794037  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.794083  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.803034  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.993253  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.993322  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.002295  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.193608  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.193667  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.201663  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.393968  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.394033  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.402169  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.593519  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.593576  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.602288  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.793534  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.793598  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.803943  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.803964  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.803995  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.812522  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
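
Note: every "Checking apiserver status" round above is the single probe below: -f matches against the full command line, -x anchors the regex so it must match that command line exactly, -n keeps only the newest PID, and exit status 1 with empty output means no kube-apiserver process exists, which is what drives the reconfigure decision that follows:

    # Exit 0 and print a PID while the apiserver runs; exit 1 otherwise
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
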
	I0531 18:00:51.812554  253603 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:00:51.812560  253603 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:00:51.812574  253603 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:00:51.812615  253603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:00:51.839954  253603 cri.go:87] found id: "776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af"
	I0531 18:00:51.839976  253603 cri.go:87] found id: "e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0"
	I0531 18:00:51.839982  253603 cri.go:87] found id: "8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620"
	I0531 18:00:51.839989  253603 cri.go:87] found id: "2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914"
	I0531 18:00:51.839994  253603 cri.go:87] found id: "f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9"
	I0531 18:00:51.840001  253603 cri.go:87] found id: "6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b"
	I0531 18:00:51.840013  253603 cri.go:87] found id: ""
	I0531 18:00:51.840018  253603 cri.go:232] Stopping containers: [776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0 8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620 2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914 f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9 6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b]
	I0531 18:00:51.840059  253603 ssh_runner.go:195] Run: which crictl
	I0531 18:00:51.842973  253603 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0 8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620 2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914 f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9 6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b
	I0531 18:00:51.869603  253603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:00:51.880644  253603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:00:51.887664  253603 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 17:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 17:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:59 /etc/kubernetes/scheduler.conf
	
	I0531 18:00:51.887720  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:00:51.894538  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:00:51.901534  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:00:51.908371  253603 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.908424  253603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:00:51.917592  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:00:51.925101  253603 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.925151  253603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:00:51.931258  253603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:00:51.937908  253603 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:00:51.937925  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:51.981409  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.730818  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.866579  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.918070  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
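
Note: rather than a full kubeadm init, restartCluster replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the repaired config, which rewrites the static-pod manifests without re-bootstrapping the cluster. Whether the phases landed can be checked on the node; a sketch:

    ls /etc/kubernetes/manifests            # static-pod manifests written by the phases
    sudo crictl pods --name kube-apiserver  # the sandbox appears once kubelet picks them up
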
	I0531 18:00:52.960507  253603 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:00:52.960554  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:53.469301  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:53.969201  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:54.469096  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:54.968777  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:55.468873  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:55.968873  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:56.468973  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:56.969026  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:57.468917  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:57.968887  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:58.469411  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:58.969742  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:59.011037  253603 api_server.go:71] duration metric: took 6.050532367s to wait for apiserver process to appear ...
	I0531 18:00:59.011067  253603 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:00:59.011079  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:00:59.011494  253603 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0531 18:00:59.512207  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:02.105106  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:01:02.105133  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:01:02.512478  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:02.516889  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:01:02.516910  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:01:03.012313  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:03.016705  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:01:03.016731  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:01:03.512288  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:03.516555  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 18:01:03.522009  253603 api_server.go:140] control plane version: v1.23.6
	I0531 18:01:03.522027  253603 api_server.go:130] duration metric: took 4.510954896s to wait for apiserver health ...
	I0531 18:01:03.522036  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:01:03.522043  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:01:03.524134  253603 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:01:03.525439  253603 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:01:03.529095  253603 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:01:03.529112  253603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:01:03.541449  253603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:01:04.388379  253603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:01:04.394833  253603 system_pods.go:59] 9 kube-system pods found
	I0531 18:01:04.394868  253603 system_pods.go:61] "coredns-64897985d-lv5sq" [203ef58b-2708-4651-ab86-861f5fa69372] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394878  253603 system_pods.go:61] "etcd-newest-cni-20220531175602-6903" [5f7cb658-1ceb-4c47-8bee-effc9614aa23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:01:04.394887  253603 system_pods.go:61] "kindnet-dfhrt" [d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:01:04.394895  253603 system_pods.go:61] "kube-apiserver-newest-cni-20220531175602-6903" [d67d50e2-7deb-4c61-81a8-36012352da0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:01:04.394908  253603 system_pods.go:61] "kube-controller-manager-newest-cni-20220531175602-6903" [d1b6d2af-6fb6-4c89-bfbf-d40d8c24f4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:01:04.394914  253603 system_pods.go:61] "kube-proxy-44xvh" [aa6aeb24-8d4b-4960-99a0-65d0493743bf] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:01:04.394927  253603 system_pods.go:61] "kube-scheduler-newest-cni-20220531175602-6903" [5f2f1f7a-3268-4649-9525-4849378243c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:01:04.394933  253603 system_pods.go:61] "metrics-server-b955d9d8-64wmz" [9995c513-1795-474c-833a-e60e61dc0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394938  253603 system_pods.go:61] "storage-provisioner" [368826ec-ecba-4116-971f-0e75506e77bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394945  253603 system_pods.go:74] duration metric: took 6.541942ms to wait for pod list to return data ...
	I0531 18:01:04.394952  253603 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:01:04.397297  253603 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:01:04.397318  253603 node_conditions.go:123] node cpu capacity is 8
	I0531 18:01:04.397328  253603 node_conditions.go:105] duration metric: took 2.369222ms to run NodePressure ...
	I0531 18:01:04.397343  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:01:04.522242  253603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:01:04.528860  253603 ops.go:34] apiserver oom_adj: -16
	I0531 18:01:04.528888  253603 kubeadm.go:630] restartCluster took 15.759378612s
	I0531 18:01:04.528897  253603 kubeadm.go:397] StartCluster complete in 15.801681788s
	I0531 18:01:04.528917  253603 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:01:04.529033  253603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:01:04.530679  253603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:01:04.533767  253603 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531175602-6903" rescaled to 1
	I0531 18:01:04.533818  253603 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:01:04.533838  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:01:04.536326  253603 out.go:177] * Verifying Kubernetes components...
	I0531 18:01:04.533856  253603 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 18:01:04.534015  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:01:04.537649  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:01:04.537683  253603 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537700  253603 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537713  253603 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531175602-6903"
	I0531 18:01:04.537713  253603 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537715  253603 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531175602-6903"
	W0531 18:01:04.537721  253603 addons.go:165] addon storage-provisioner should already be in state true
	W0531 18:01:04.537727  253603 addons.go:165] addon metrics-server should already be in state true
	I0531 18:01:04.537767  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.537777  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.537687  253603 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537727  253603 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531175602-6903"
	I0531 18:01:04.537814  253603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531175602-6903"
	W0531 18:01:04.537839  253603 addons.go:165] addon dashboard should already be in state true
	I0531 18:01:04.537886  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.538099  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538258  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538288  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538354  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.582251  253603 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:01:04.583780  253603 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:01:04.585078  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:01:04.585101  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:01:04.585148  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.586519  253603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:01:04.588458  253603 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:01:04.589819  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:01:04.589835  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:01:04.589870  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.588540  253603 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:01:04.589914  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:01:04.589608  253603 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531175602-6903"
	W0531 18:01:04.589994  253603 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:01:04.590025  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.590456  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.589970  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.622440  253603 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:01:04.622511  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:01:04.622642  253603 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 18:01:04.633508  253603 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:01:04.633529  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:01:04.633581  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.633878  253603 api_server.go:71] duration metric: took 100.025723ms to wait for apiserver process to appear ...
	I0531 18:01:04.633902  253603 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:01:04.633915  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:04.636308  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.639626  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 18:01:04.640522  253603 api_server.go:140] control plane version: v1.23.6
	I0531 18:01:04.640542  253603 api_server.go:130] duration metric: took 6.632874ms to wait for apiserver health ...
	I0531 18:01:04.640552  253603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:01:04.641487  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.650123  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.651235  253603 system_pods.go:59] 9 kube-system pods found
	I0531 18:01:04.651429  253603 system_pods.go:61] "coredns-64897985d-lv5sq" [203ef58b-2708-4651-ab86-861f5fa69372] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651499  253603 system_pods.go:61] "etcd-newest-cni-20220531175602-6903" [5f7cb658-1ceb-4c47-8bee-effc9614aa23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:01:04.651514  253603 system_pods.go:61] "kindnet-dfhrt" [d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:01:04.651525  253603 system_pods.go:61] "kube-apiserver-newest-cni-20220531175602-6903" [d67d50e2-7deb-4c61-81a8-36012352da0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:01:04.651537  253603 system_pods.go:61] "kube-controller-manager-newest-cni-20220531175602-6903" [d1b6d2af-6fb6-4c89-bfbf-d40d8c24f4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:01:04.651547  253603 system_pods.go:61] "kube-proxy-44xvh" [aa6aeb24-8d4b-4960-99a0-65d0493743bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:01:04.651557  253603 system_pods.go:61] "kube-scheduler-newest-cni-20220531175602-6903" [5f2f1f7a-3268-4649-9525-4849378243c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:01:04.651565  253603 system_pods.go:61] "metrics-server-b955d9d8-64wmz" [9995c513-1795-474c-833a-e60e61dc0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651574  253603 system_pods.go:61] "storage-provisioner" [368826ec-ecba-4116-971f-0e75506e77bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651580  253603 system_pods.go:74] duration metric: took 11.022992ms to wait for pod list to return data ...
	I0531 18:01:04.651588  253603 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:01:04.653854  253603 default_sa.go:45] found service account: "default"
	I0531 18:01:04.653878  253603 default_sa.go:55] duration metric: took 2.284188ms for default service account to be created ...
	I0531 18:01:04.653893  253603 kubeadm.go:572] duration metric: took 120.041989ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 18:01:04.653922  253603 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:01:04.656488  253603 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:01:04.656514  253603 node_conditions.go:123] node cpu capacity is 8
	I0531 18:01:04.656527  253603 node_conditions.go:105] duration metric: took 2.599307ms to run NodePressure ...
	I0531 18:01:04.656538  253603 start.go:213] waiting for startup goroutines ...
	I0531 18:01:04.673010  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.728342  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:01:04.728368  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:01:04.736428  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:01:04.736451  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:01:04.742828  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:01:04.742852  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:01:04.746024  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:01:04.750055  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:01:04.750076  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:01:04.758284  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:01:04.758304  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:01:04.801922  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:01:04.801947  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:01:04.802275  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:01:04.807930  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:01:04.820976  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:01:04.821004  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:01:04.911836  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:01:04.911866  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:01:04.931751  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:01:04.931779  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:01:05.022410  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:01:05.022437  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:01:05.105670  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:01:05.105701  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:01:05.123433  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:01:05.123460  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:01:05.202647  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:01:05.305415  253603 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531175602-6903"
	I0531 18:01:05.471026  253603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:01:05.472226  253603 addons.go:417] enableAddons completed in 938.375737ms
	I0531 18:01:05.510490  253603 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0531 18:01:05.512509  253603 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531175602-6903" cluster and "default" namespace by default
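	The stderr trace above is the normal restart sequence: minikube polls /healthz until the apiserver's rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks finish (about 4.5s here), re-applies the kindnet CNI manifest, then re-enables the addons. To reproduce the per-check breakdown the poller sees, something like the following should work — the address is taken from the log above and will differ per profile:

	    # Verbose health breakdown, one [+]/[-] line per check:
	    kubectl get --raw='/healthz?verbose'
	    # Or hit the endpoint directly, skipping TLS verification:
	    curl -k 'https://192.168.58.2:8443/healthz?verbose'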
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ef96bc146e16f       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   df6575866fdea
	b12fda9e12e52       4c03754524064       12 minutes ago      Running             kube-proxy                0                   988de4837f61f
	91afec248cd26       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   8ae5c296424b2
	2d2cb82735b88       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   ac6eb1a1a0685
	0d1755990bfb1       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   dc54f5b9ebd0e
	c25ff47b27774       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   d852e12f002ef
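	In the table above every control-plane container has been Running for 12 minutes, but kindnet-cni is Exited on attempt 3 — the CNI pod is crash-looping. A quick way to confirm from the host, assuming the profile name shown in this test, is:

	    # List all containers inside the node, including exited ones:
	    minikube -p no-preload-20220531175323-6903 ssh -- sudo crictl ps -a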
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:53:25 UTC, end at Tue 2022-05-31 18:06:17 UTC. --
	May 31 17:59:39 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:39.540719640Z" level=warning msg="cleaning up after shim disconnected" id=7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb namespace=k8s.io
	May 31 17:59:39 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:39.540735610Z" level=info msg="cleaning up dead shim"
	May 31 17:59:39 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:39.549493789Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:59:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\n"
	May 31 17:59:40 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:40.476348155Z" level=info msg="RemoveContainer for \"2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee\""
	May 31 17:59:40 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:40.480884825Z" level=info msg="RemoveContainer for \"2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee\" returns successfully"
	May 31 17:59:53 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:53.832949086Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	May 31 17:59:53 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:53.848216078Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\""
	May 31 17:59:53 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:53.848792123Z" level=info msg="StartContainer for \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\""
	May 31 17:59:53 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:53.917342023Z" level=info msg="StartContainer for \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\" returns successfully"
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.142401191Z" level=info msg="shim disconnected" id=52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.142467418Z" level=warning msg="cleaning up after shim disconnected" id=52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21 namespace=k8s.io
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.142483068Z" level=info msg="cleaning up dead shim"
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.151185195Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:02:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2988 runtime=io.containerd.runc.v2\n"
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.774162475Z" level=info msg="RemoveContainer for \"7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb\""
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.778503623Z" level=info msg="RemoveContainer for \"7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb\" returns successfully"
	May 31 18:03:00 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:03:00.832508669Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	May 31 18:03:00 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:03:00.845123983Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516\""
	May 31 18:03:00 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:03:00.845458906Z" level=info msg="StartContainer for \"ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516\""
	May 31 18:03:00 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:03:00.915513620Z" level=info msg="StartContainer for \"ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516\" returns successfully"
	May 31 18:05:41 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:41.140294527Z" level=info msg="shim disconnected" id=ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516
	May 31 18:05:41 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:41.140355763Z" level=warning msg="cleaning up after shim disconnected" id=ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516 namespace=k8s.io
	May 31 18:05:41 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:41.140375603Z" level=info msg="cleaning up dead shim"
	May 31 18:05:41 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:41.150012559Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:05:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3084 runtime=io.containerd.runc.v2\n"
	May 31 18:05:42 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:42.081916800Z" level=info msg="RemoveContainer for \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\""
	May 31 18:05:42 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:42.085998365Z" level=info msg="RemoveContainer for \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\" returns successfully"
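	The containerd log shows the same cycle twice: kindnet-cni attempt 2 starts at 17:59:53 and its shim disconnects at 18:02:34; attempt 3 starts at 18:03:00 and disconnects at 18:05:41. Each run lasts roughly 2m40s before the container exits and the kubelet restarts it with backoff. To see why it exits, read the crashed container's output (ID prefix taken from the status table above):

	    minikube -p no-preload-20220531175323-6903 ssh -- sudo crictl logs ef96bc146e16f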
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220531175323-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220531175323-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=no-preload-20220531175323-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_54_00_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:53:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220531175323-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:06:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:04:47 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:04:47 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:04:47 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:04:47 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220531175323-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                3f650030-6900-444d-b03b-802678a62df1
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220531175323-6903                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-n856k                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-20220531175323-6903              250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-20220531175323-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8szbz                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-20220531175323-6903              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 12m   kube-proxy  
	  Normal  Starting                 12m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet     Updated Node Allocatable limit across pods
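	This is the core of the failure: the node is stuck Ready=False with the node.kubernetes.io/not-ready:NoSchedule taint because the container runtime reports "cni plugin not initialized", and that taint is exactly what kept coredns, metrics-server, and storage-provisioner Pending-Unschedulable earlier in the log. To confirm the taint and the Ready message directly (node name from the output above):

	    kubectl get node no-preload-20220531175323-6903 \
	      -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'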
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
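	The repeated "martian source" lines mean the kernel saw packets sourced from 10.244.0.x — the pod CIDR — arriving on an interface with no valid return route, which is expected noise while CNI routing is absent; it is a symptom of the broken network, not the cause. One way to check whether the pod-CIDR routes ever got installed on the node:

	    minikube -p no-preload-20220531175323-6903 ssh -- ip route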
	
	* 
	* ==> etcd [c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66] <==
	* {"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220531175323-6903 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:53:54.710Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-05-31T17:55:16.484Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.714125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:55:16.484Z","caller":"traceutil/trace.go:171","msg":"trace[628559206] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:490; }","duration":"114.829832ms","start":"2022-05-31T17:55:16.369Z","end":"2022-05-31T17:55:16.484Z","steps":["trace[628559206] 'range keys from in-memory index tree'  (duration: 101.483588ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:56:07.995Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"269.683342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220531175323-6903\" ","response":"range_response_count:1 size:3986"}
	{"level":"info","ts":"2022-05-31T17:56:07.995Z","caller":"traceutil/trace.go:171","msg":"trace[1668449359] range","detail":"{range_begin:/registry/minions/no-preload-20220531175323-6903; range_end:; response_count:1; response_revision:502; }","duration":"269.788171ms","start":"2022-05-31T17:56:07.725Z","end":"2022-05-31T17:56:07.995Z","steps":["trace[1668449359] 'range keys from in-memory index tree'  (duration: 269.568906ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T17:56:08.636Z","caller":"traceutil/trace.go:171","msg":"trace[630377434] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"212.121446ms","start":"2022-05-31T17:56:08.424Z","end":"2022-05-31T17:56:08.636Z","steps":["trace[630377434] 'process raft request'  (duration: 175.233347ms)","trace[630377434] 'compare'  (duration: 36.797114ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:13.831Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.273396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220531175323-6903\" ","response":"range_response_count:1 size:3986"}
	{"level":"info","ts":"2022-05-31T17:56:13.831Z","caller":"traceutil/trace.go:171","msg":"trace[1174066299] range","detail":"{range_begin:/registry/minions/no-preload-20220531175323-6903; range_end:; response_count:1; response_revision:503; }","duration":"105.354475ms","start":"2022-05-31T17:56:13.726Z","end":"2022-05-31T17:56:13.831Z","steps":["trace[1174066299] 'range keys from in-memory index tree'  (duration: 105.148687ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:59:31.665Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"276.327041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/busybox.16f44254ee06029c\" ","response":"range_response_count:1 size:676"}
	{"level":"info","ts":"2022-05-31T17:59:31.665Z","caller":"traceutil/trace.go:171","msg":"trace[668313946] range","detail":"{range_begin:/registry/events/default/busybox.16f44254ee06029c; range_end:; response_count:1; response_revision:560; }","duration":"276.422914ms","start":"2022-05-31T17:59:31.389Z","end":"2022-05-31T17:59:31.665Z","steps":["trace[668313946] 'agreement among raft nodes before linearized reading'  (duration: 93.332153ms)","trace[668313946] 'range keys from in-memory index tree'  (duration: 182.956428ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:59:31.771Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.247075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.16f4421cb5fb51d8\" ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2022-05-31T17:59:31.771Z","caller":"traceutil/trace.go:171","msg":"trace[34538388] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.16f4421cb5fb51d8; range_end:; response_count:1; response_revision:561; }","duration":"100.374417ms","start":"2022-05-31T17:59:31.670Z","end":"2022-05-31T17:59:31.771Z","steps":["trace[34538388] 'agreement among raft nodes before linearized reading'  (duration: 97.570176ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T18:03:54.970Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":552}
	{"level":"info","ts":"2022-05-31T18:03:54.971Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":552,"took":"442.928µs"}
	
	* 
	* ==> kernel <==
	*  18:06:17 up  1:48,  0 users,  load average: 0.52, 0.64, 1.20
	Linux no-preload-20220531175323-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509] <==
	* I0531 17:53:57.037883       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:53:57.037926       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:53:57.040098       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:53:57.101460       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 17:53:57.101814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:53:57.101904       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:53:57.936377       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:53:57.936399       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:53:57.941719       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:53:57.944313       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:53:57.944333       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:53:58.331802       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:53:58.360400       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:53:58.421652       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:53:58.426532       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0531 17:53:58.427280       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:53:58.430236       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:53:59.065574       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:53:59.723054       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:53:59.729203       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:53:59.737186       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:54:04.817265       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:54:12.817514       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:54:12.904631       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:54:13.634581       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d] <==
	* I0531 17:54:12.902059       1 shared_informer.go:247] Caches are synced for node 
	I0531 17:54:12.902089       1 range_allocator.go:173] Starting range CIDR allocator
	I0531 17:54:12.902093       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0531 17:54:12.902100       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0531 17:54:12.902118       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0531 17:54:12.902170       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0531 17:54:12.902710       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0531 17:54:12.902766       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0531 17:54:12.902864       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:54:12.903597       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-ndl5c"
	I0531 17:54:12.916498       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8cptk"
	I0531 17:54:12.918139       1 range_allocator.go:374] Set node no-preload-20220531175323-6903 PodCIDR to [10.244.0.0/24]
	I0531 17:54:12.924160       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n856k"
	I0531 17:54:12.924335       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8szbz"
	I0531 17:54:13.002190       1 shared_informer.go:247] Caches are synced for cronjob 
	I0531 17:54:13.101463       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:54:13.101545       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:54:13.101587       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:54:13.111547       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 17:54:13.111574       1 disruption.go:371] Sending events to api server.
	I0531 17:54:13.138785       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:54:13.145126       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-ndl5c"
	I0531 17:54:13.511066       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:54:13.511070       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:54:13.511116       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
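	The controller-manager is doing its job: all caches synced, PodCIDR 10.244.0.0/24 allocated to the node, kindnet-n856k and kube-proxy-8szbz created from their daemonsets, and coredns scaled down to one replica. The control plane itself is functional; the breakage is below it, in the CNI pod. To see the workloads it acted on:

	    kubectl -n kube-system get daemonsets,deployments,replicasets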
	
	* 
	* ==> kube-proxy [b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e] <==
	* I0531 17:54:13.606562       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 17:54:13.606652       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 17:54:13.606701       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:54:13.631765       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:54:13.631796       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:54:13.631804       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:54:13.631825       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:54:13.632185       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:54:13.632671       1 config.go:317] "Starting service config controller"
	I0531 17:54:13.632688       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:54:13.632706       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:54:13.632709       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:54:13.735397       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 17:54:13.735427       1 shared_informer.go:247] Caches are synced for service config 
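	kube-proxy also came up cleanly, falling back to the iptables proxier in dual-stack IPv4 mode. To verify the proxier mode a running cluster actually chose (kube-proxy pods carry the k8s-app=kube-proxy label in kubeadm-style clusters):

	    kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier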
	
	* 
	* ==> kube-scheduler [91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b] <==
	* W0531 17:53:57.017973       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:53:57.018249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:53:57.018293       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:57.018334       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:53:57.018340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:57.018350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:53:57.866168       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:53:57.866195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:53:57.880204       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:57.880227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:58.004410       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:58.004458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:58.020771       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:53:58.020798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:53:58.044767       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:53:58.044798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:53:58.102366       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:53:58.102398       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:53:58.102392       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:53:58.102427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:53:58.155068       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:53:58.155107       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:53:58.202503       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:53:58.202555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0531 17:53:58.514561       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:53:25 UTC, end at Tue 2022-05-31 18:06:17 UTC. --
	May 31 18:04:50 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:04:50.161044    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:04:55 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:04:55.162485    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:00 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:00.163450    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:05 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:05.164466    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:10 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:10.165312    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:15 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:15.166766    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:20 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:20.168194    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:25 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:25.168890    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:30 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:30.169815    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:35 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:35.170945    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:40 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:40.172582    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:42 no-preload-20220531175323-6903 kubelet[1738]: I0531 18:05:42.080655    1738 scope.go:110] "RemoveContainer" containerID="52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21"
	May 31 18:05:42 no-preload-20220531175323-6903 kubelet[1738]: I0531 18:05:42.081008    1738 scope.go:110] "RemoveContainer" containerID="ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	May 31 18:05:42 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:42.081396    1738 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n856k_kube-system(1bf232e0-3302-4413-8693-378d7bcc2bad)\"" pod="kube-system/kindnet-n856k" podUID=1bf232e0-3302-4413-8693-378d7bcc2bad
	May 31 18:05:45 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:45.173478    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:50 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:50.174814    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:54 no-preload-20220531175323-6903 kubelet[1738]: I0531 18:05:54.830002    1738 scope.go:110] "RemoveContainer" containerID="ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	May 31 18:05:54 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:54.830300    1738 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n856k_kube-system(1bf232e0-3302-4413-8693-378d7bcc2bad)\"" pod="kube-system/kindnet-n856k" podUID=1bf232e0-3302-4413-8693-378d7bcc2bad
	May 31 18:05:55 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:55.176156    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:00 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:00.177362    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:05 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:05.178242    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:08 no-preload-20220531175323-6903 kubelet[1738]: I0531 18:06:08.829724    1738 scope.go:110] "RemoveContainer" containerID="ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	May 31 18:06:08 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:08.829981    1738 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n856k_kube-system(1bf232e0-3302-4413-8693-378d7bcc2bad)\"" pod="kube-system/kindnet-n856k" podUID=1bf232e0-3302-4413-8693-378d7bcc2bad
	May 31 18:06:10 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:10.178922    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:15 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:15.180399    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
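[Editor's note] The kubelet excerpt above pins down the failure mode for this profile: the node never reports NetworkReady because the CNI is not initialized, and the CNI itself (the kindnet-cni container of pod kindnet-n856k, kindnet being the CNI minikube recommends for the docker driver with containerd, per the cni.go lines later in this report) is stuck in CrashLoopBackOff, so workloads stay Pending. A minimal follow-up sketch, using the profile, pod, and container names taken from the log; these are ordinary kubectl/crictl/minikube commands, not output captured by this run:

	# Output of the previous (crashed) kindnet-cni container
	kubectl --context no-preload-20220531175323-6903 -n kube-system \
	  logs kindnet-n856k -c kindnet-cni --previous

	# Same view from inside the node, via containerd's CRI
	out/minikube-linux-amd64 -p no-preload-20220531175323-6903 ssh \
	  "sudo crictl ps -a --name kindnet-cni"
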
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-8cptk storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 describe pod busybox coredns-64897985d-8cptk storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220531175323-6903 describe pod busybox coredns-64897985d-8cptk storage-provisioner: exit status 1 (57.42477ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dw8dv (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-dw8dv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  47s (x8 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-8cptk" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220531175323-6903 describe pod busybox coredns-64897985d-8cptk storage-provisioner: exit status 1
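[Editor's note] The exit status 1 from this describe is a namespacing artifact rather than a second failure: without -n, kubectl describe searches only the default namespace, which holds busybox, while coredns-64897985d-8cptk and storage-provisioner are kube-system pods, hence the two NotFound errors. A sketch of the same post-mortem with the namespaces spelled out (assuming the pods still exist when it runs):

	kubectl --context no-preload-20220531175323-6903 describe pod busybox
	kubectl --context no-preload-20220531175323-6903 -n kube-system \
	  describe pod coredns-64897985d-8cptk storage-provisioner

The busybox FailedScheduling event is consistent with the kubelet log: while the CNI is down the node keeps its node.kubernetes.io/not-ready taint, which the pod does not tolerate.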
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220531175323-6903
helpers_test.go:235: (dbg) docker inspect no-preload-20220531175323-6903:

-- stdout --
	[
	    {
	        "Id": "a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d",
	        "Created": "2022-05-31T17:53:25.199469079Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230732,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:53:25.538304199Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hosts",
	        "LogPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d-json.log",
	        "Name": "/no-preload-20220531175323-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220531175323-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220531175323-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220531175323-6903",
	                "Source": "/var/lib/docker/volumes/no-preload-20220531175323-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220531175323-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "name.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6413ea608901d520cb420be1567e8fbd6f13d85f29fc8ae60c4095bc5f68676",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a6413ea60890",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220531175323-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a4f33d13fefc",
	                        "no-preload-20220531175323-6903"
	                    ],
	                    "NetworkID": "b2391a84ebd8e16dd2e9aca80777d6d03045cffc9cfc8290f45a61a1473c3244",
	                    "EndpointID": "81cd7594f26487ced42b2407b71e68ba6220c3d831ffa8d20b6ab5ac89aa38f6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
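[Editor's note] The inspect output shows the Docker side is healthy: State.Status is "running", the kicbase entrypoint is up, and the apiserver port 8443/tcp is published on 127.0.0.1:49404. A small sketch for reading that mapping directly instead of scanning the full JSON, using Docker's Go-template --format flag:

	# Host port the cluster's apiserver (8443/tcp) is published on
	docker inspect \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' \
	  no-preload-20220531175323-6903

This localizes the failure inside the cluster (CNI and node readiness) rather than at the driver or container layer.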
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220531175323-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                         | disable-driver-mounts-20220531175323-6903      | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | disable-driver-mounts-20220531175323-6903                  |                                                |         |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:54 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |         |                |                     |                     |
	|         | --keep-context=false                                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903                     | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                                 | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:00:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:00:31.855034  253603 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:00:31.855128  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855137  253603 out.go:309] Setting ErrFile to fd 2...
	I0531 18:00:31.855169  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855275  253603 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:00:31.855500  253603 out.go:303] Setting JSON to false
	I0531 18:00:31.857002  253603 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6183,"bootTime":1654013849,"procs":755,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:00:31.857065  253603 start.go:125] virtualization: kvm guest
	I0531 18:00:31.859650  253603 out.go:177] * [newest-cni-20220531175602-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:00:31.861106  253603 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:00:31.861145  253603 notify.go:193] Checking for updates...
	I0531 18:00:31.863620  253603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:00:31.865010  253603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:31.866391  253603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:00:31.867875  253603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:00:31.871501  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:31.872091  253603 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:00:31.913476  253603 docker.go:137] docker version: linux-20.10.16
	I0531 18:00:31.913607  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.012796  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:31.941581138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.012892  253603 docker.go:254] overlay module found
	I0531 18:00:32.015694  253603 out.go:177] * Using the docker driver based on existing profile
	I0531 18:00:32.016948  253603 start.go:284] selected driver: docker
	I0531 18:00:32.016961  253603 start.go:806] validating driver "docker" against &{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.017071  253603 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:00:32.017980  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.118816  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:32.047560918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.119131  253603 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0531 18:00:32.119167  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:32.119175  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:32.119195  253603 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119208  253603 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119215  253603 start_flags.go:306] config:
	{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.122424  253603 out.go:177] * Starting control plane node newest-cni-20220531175602-6903 in cluster newest-cni-20220531175602-6903
	I0531 18:00:32.123755  253603 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:00:32.125291  253603 out.go:177] * Pulling base image ...
	I0531 18:00:32.126765  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:32.126808  253603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:00:32.126822  253603 cache.go:57] Caching tarball of preloaded images
	I0531 18:00:32.126856  253603 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:00:32.127020  253603 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:00:32.127034  253603 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:00:32.127170  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.176155  253603 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:00:32.176180  253603 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:00:32.176199  253603 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:00:32.176233  253603 start.go:352] acquiring machines lock for newest-cni-20220531175602-6903: {Name:mk17d90b6d3b0fde22fd963b3786e868dc154060 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:00:32.176322  253603 start.go:356] acquired machines lock for "newest-cni-20220531175602-6903" in 69.182µs
	I0531 18:00:32.176340  253603 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:00:32.176344  253603 fix.go:55] fixHost starting: 
	I0531 18:00:32.176560  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.209761  253603 fix.go:103] recreateIfNeeded on newest-cni-20220531175602-6903: state=Stopped err=<nil>
	W0531 18:00:32.209791  253603 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:00:32.212875  253603 out.go:177] * Restarting existing docker container for "newest-cni-20220531175602-6903" ...
	I0531 18:00:30.443775  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.444063  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.214225  253603 cli_runner.go:164] Run: docker start newest-cni-20220531175602-6903
	I0531 18:00:32.577327  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.610657  253603 kic.go:416] container "newest-cni-20220531175602-6903" state is running.
	I0531 18:00:32.611011  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:32.643675  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.643905  253603 machine.go:88] provisioning docker machine ...
	I0531 18:00:32.643932  253603 ubuntu.go:169] provisioning hostname "newest-cni-20220531175602-6903"
	I0531 18:00:32.643983  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:32.674555  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:32.674809  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:32.674837  253603 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531175602-6903 && echo "newest-cni-20220531175602-6903" | sudo tee /etc/hostname
	I0531 18:00:32.675642  253603 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46432->127.0.0.1:49427: read: connection reset by peer
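	
	Note: immediately after `docker start`, sshd inside the restarted container is still coming up, so this first TCP dial is reset mid-handshake; libmachine simply retries, and the same hostname command succeeds about three seconds later (18:00:35 below). A minimal way to wait for a usable SSH endpoint, as a sketch that assumes the host-mapped port 49427 shown in this log:
	
	    # Poll until sshd completes a banner exchange; ssh-keyscan fails fast until then
	    until ssh-keyscan -T 2 -p 49427 127.0.0.1 2>/dev/null | grep -q ssh; do sleep 1; done
	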
	I0531 18:00:35.795562  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531175602-6903
	
	I0531 18:00:35.795625  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:35.826982  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:35.827166  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:35.827189  253603 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531175602-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531175602-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531175602-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:00:35.938582  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:00:35.938614  253603 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:00:35.938689  253603 ubuntu.go:177] setting up certificates
	I0531 18:00:35.938700  253603 provision.go:83] configureAuth start
	I0531 18:00:35.938739  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:35.970778  253603 provision.go:138] copyHostCerts
	I0531 18:00:35.970836  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:00:35.970855  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:00:35.970915  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:00:35.971070  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:00:35.971088  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:00:35.971129  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:00:35.971236  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:00:35.971254  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:00:35.971287  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:00:35.971355  253603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531175602-6903 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531175602-6903]
	I0531 18:00:36.142238  253603 provision.go:172] copyRemoteCerts
	I0531 18:00:36.142291  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:00:36.142320  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.173472  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.254066  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:00:36.271055  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:00:36.287105  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:00:36.302927  253603 provision.go:86] duration metric: configureAuth took 364.217481ms
	I0531 18:00:36.302948  253603 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:00:36.303122  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:36.303134  253603 machine.go:91] provisioned docker machine in 3.659215237s
	I0531 18:00:36.303168  253603 start.go:306] post-start starting for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 18:00:36.303175  253603 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:00:36.303216  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:00:36.303261  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.335634  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.418002  253603 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:00:36.420669  253603 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:00:36.420693  253603 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:00:36.420701  253603 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:00:36.420706  253603 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:00:36.420719  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:00:36.420765  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:00:36.420825  253603 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:00:36.420897  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:00:36.427208  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:36.443819  253603 start.go:309] post-start completed in 140.639246ms
	I0531 18:00:36.443888  253603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:00:36.443930  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.477971  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.555314  253603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:00:36.559129  253603 fix.go:57] fixHost completed within 4.38277864s
	I0531 18:00:36.559171  253603 start.go:81] releasing machines lock for "newest-cni-20220531175602-6903", held for 4.382836668s
	I0531 18:00:36.559246  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:36.590986  253603 ssh_runner.go:195] Run: systemctl --version
	I0531 18:00:36.591023  253603 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:00:36.591084  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.591027  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.624550  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.625023  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.722476  253603 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:00:36.732794  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:00:36.741236  253603 docker.go:187] disabling docker service ...
	I0531 18:00:36.741281  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:00:36.757377  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:00:36.765762  253603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:00:36.850081  253603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:00:34.943765  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.944411  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:39.443721  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.930380  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:00:36.938984  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:00:36.951805  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
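	
	Note: the containerd configuration is shipped base64-encoded (the payload is wrapped across the lines above in this report) so it survives shell quoting, and the command itself pipes it through `base64 -d` into /etc/containerd/config.toml. Recognizable settings in the decoded TOML include `version = 2`, `snapshotter = "overlayfs"`, `sandbox_image = "k8s.gcr.io/pause:3.6"`, `conf_dir = "/etc/cni/net.mk"` (the same CNI directory the kubelet is pointed at later), and `SystemdCgroup = false`, consistent with the cgroupfs driver used in the kubeadm config below. To inspect what landed on the node, as a sketch:
	
	    sudo grep -E 'version|snapshotter|SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	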
	I0531 18:00:36.964223  253603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:00:36.970217  253603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:00:36.976123  253603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:00:37.050759  253603 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:00:37.133255  253603 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:00:37.133326  253603 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:00:37.136650  253603 start.go:468] Will wait 60s for crictl version
	I0531 18:00:37.136705  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:37.162540  253603 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:00:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:00:41.943597  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:43.944098  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.209660  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:48.232631  253603 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:00:48.232687  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.260476  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.288516  253603 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:00:48.289983  253603 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:00:48.321110  253603 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 18:00:48.324362  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
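	
	Note: this is minikube's idempotent /etc/hosts update: filter out any stale `host.minikube.internal` line, append the current gateway mapping, then copy the temp file back with `cp` rather than `mv`, because /etc/hosts is bind-mounted into the container and has to be rewritten in place rather than replaced. The same pattern, annotated as a sketch:
	
	    { grep -v $'\thost.minikube.internal$' /etc/hosts;   # keep every line except the old mapping
	      echo $'192.168.58.1\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts   # cp, not mv: rewrites the bind-mounted file in place
	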
	I0531 18:00:48.335260  253603 out.go:177]   - kubelet.network-plugin=cni
	I0531 18:00:48.336944  253603 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 18:00:48.338457  253603 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:00:46.442937  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.443904  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.444077  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.446109  243743 node_ready.go:38] duration metric: took 4m0.008452547s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:00:50.448431  243743 out.go:177] 
	W0531 18:00:50.449997  243743 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:00:50.450021  243743 out.go:239] * 
	W0531 18:00:50.450791  243743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:00:50.452520  243743 out.go:177] 
	I0531 18:00:48.339824  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:48.339884  253603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:00:48.363681  253603 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:00:48.363700  253603 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:00:48.363745  253603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:00:48.385839  253603 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:00:48.385856  253603 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:00:48.385893  253603 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:00:48.408057  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:48.408077  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:48.408091  253603 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0531 18:00:48.408103  253603 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220531175602-6903 NodeName:newest-cni-20220531175602-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect
:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:00:48.408230  253603 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220531175602-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
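	
	Note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new (2195 bytes, below) and consumed phase by phase during the cluster restart. The `"0%!"(MISSING)` renderings under evictionHard are the same printf-logging artifact seen earlier with crictl.yaml; the file on disk almost certainly carries plain "0%" thresholds. One of the phases run later in this log, verbatim:
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	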
	
	I0531 18:00:48.408307  253603 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220531175602-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
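	
	Note: the empty `ExecStart=` followed by a second `ExecStart=` is the standard systemd override idiom: the blank assignment clears the base unit's command so the drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below) can substitute its own. To see which command systemd resolves once the drop-in lands, as a sketch:
	
	    systemctl cat kubelet | grep '^ExecStart='   # one empty assignment, then the effective command
	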
	I0531 18:00:48.408350  253603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:00:48.414874  253603 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:00:48.414928  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:00:48.421138  253603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes)
	I0531 18:00:48.433792  253603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:00:48.447663  253603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2195 bytes)
	I0531 18:00:48.459853  253603 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:00:48.462496  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:00:48.470850  253603 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903 for IP: 192.168.58.2
	I0531 18:00:48.470935  253603 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:00:48.470970  253603 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:00:48.471030  253603 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.key
	I0531 18:00:48.471080  253603 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key.cee25041
	I0531 18:00:48.471114  253603 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key
	I0531 18:00:48.471247  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:00:48.471280  253603 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:00:48.471292  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:00:48.471322  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:00:48.471348  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:00:48.471369  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:00:48.471406  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:48.471990  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:00:48.487996  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:00:48.504050  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:00:48.520129  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:00:48.536197  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:00:48.551773  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:00:48.567698  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:00:48.583534  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:00:48.599284  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:00:48.615488  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:00:48.631736  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:00:48.648044  253603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:00:48.659819  253603 ssh_runner.go:195] Run: openssl version
	I0531 18:00:48.664514  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:00:48.671684  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.674554  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.674592  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.678953  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:00:48.685183  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:00:48.691850  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.694734  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.694775  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.699108  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:00:48.705843  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:00:48.713797  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.716588  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.716628  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.720988  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
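	
	Note: OpenSSL looks up trusted CAs by subject-hash filename, so each installed PEM is hashed with `openssl x509 -hash -noout` and symlinked as <hash>.0; that is where the names 51391683.0, 3ec20f2e.0 and b5213941.0 above come from. The derivation, as a sketch:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	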
	I0531 18:00:48.727223  253603 kubeadm.go:395] StartCluster: {Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[Me
tricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:48.727350  253603 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:00:48.727391  253603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:00:48.751975  253603 cri.go:87] found id: "776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af"
	I0531 18:00:48.751998  253603 cri.go:87] found id: "e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0"
	I0531 18:00:48.752009  253603 cri.go:87] found id: "8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620"
	I0531 18:00:48.752025  253603 cri.go:87] found id: "2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914"
	I0531 18:00:48.752038  253603 cri.go:87] found id: "f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9"
	I0531 18:00:48.752051  253603 cri.go:87] found id: "6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b"
	I0531 18:00:48.752060  253603 cri.go:87] found id: ""
	I0531 18:00:48.752094  253603 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:00:48.763086  253603 cri.go:114] JSON = null
	W0531 18:00:48.763128  253603 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
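	
	Note: this mismatch appears to be expected right after a container restart: `crictl ps -a` lists containers from CRI metadata (including exited ones), while `runc list` reports only live runc state, which is empty until the pods are recreated, so the unpause pass finds nothing to do. The two views being compared, verbatim from above:
	
	    sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # 6 ids
	    sudo runc --root /run/containerd/runc/k8s.io list -f json                             # null
	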
	I0531 18:00:48.763217  253603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:00:48.769482  253603 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:00:48.769502  253603 kubeadm.go:626] restartCluster start
	I0531 18:00:48.769537  253603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:00:48.775590  253603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:48.776475  253603 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220531175602-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:48.777108  253603 kubeconfig.go:127] "newest-cni-20220531175602-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:00:48.777968  253603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:00:48.779498  253603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:00:48.785488  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:48.785519  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:48.793052  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:48.993429  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:48.993482  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.001612  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.193914  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.193974  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.202307  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.393581  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.393647  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.401876  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.594165  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.594228  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.602448  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.793873  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.793934  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.802272  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.993549  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.993606  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.002105  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.193422  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.193478  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.201805  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.394099  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.394197  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.402406  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.593662  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.593737  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.602754  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.794037  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.794083  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.803034  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.993253  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.993322  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.002295  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.193608  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.193667  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.201663  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.393968  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.394033  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.402169  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.593519  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.593576  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.602288  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.793534  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.793598  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.803943  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.803964  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.803995  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.812522  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
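	
	Note: each "Checking apiserver status" round above is one pgrep probe, repeated roughly every 200ms; once the window closes with no kube-apiserver process found, restartCluster stops waiting and falls through to reconfiguring the cluster, as the next line records. The probe itself:
	
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # exit status 1 means no such process
	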
	I0531 18:00:51.812554  253603 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:00:51.812560  253603 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:00:51.812574  253603 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:00:51.812615  253603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:00:51.839954  253603 cri.go:87] found id: "776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af"
	I0531 18:00:51.839976  253603 cri.go:87] found id: "e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0"
	I0531 18:00:51.839982  253603 cri.go:87] found id: "8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620"
	I0531 18:00:51.839989  253603 cri.go:87] found id: "2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914"
	I0531 18:00:51.839994  253603 cri.go:87] found id: "f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9"
	I0531 18:00:51.840001  253603 cri.go:87] found id: "6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b"
	I0531 18:00:51.840013  253603 cri.go:87] found id: ""
	I0531 18:00:51.840018  253603 cri.go:232] Stopping containers: [776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0 8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620 2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914 f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9 6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b]
	I0531 18:00:51.840059  253603 ssh_runner.go:195] Run: which crictl
	I0531 18:00:51.842973  253603 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0 8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620 2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914 f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9 6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b
	I0531 18:00:51.869603  253603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
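	
	Before reconfiguring, minikube stops every kube-system container and then kubelet itself, exactly as logged above. The same steps by hand, inside the node (e.g. via minikube ssh; assumes crictl is installed, as it is in this image):
	
	  # Collect all kube-system container IDs, running or exited, then stop them.
	  IDS=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	  sudo crictl stop $IDS         # stop the containers listed above
	  sudo systemctl stop kubelet   # keep kubelet from restarting them mid-reconfigure
	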
	I0531 18:00:51.880644  253603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:00:51.887664  253603 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 17:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 17:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:59 /etc/kubernetes/scheduler.conf
	
	I0531 18:00:51.887720  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:00:51.894538  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:00:51.901534  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:00:51.908371  253603 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.908424  253603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:00:51.917592  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:00:51.925101  253603 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.925151  253603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
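	
	The grep/rm pairs above are a sanity check on the existing kubeconfigs: a file is kept only if it already points at the expected control-plane endpoint; otherwise it is deleted so kubeadm regenerates it. Written as a loop (a sketch using the same paths as in this log):
	
	  for f in admin kubelet controller-manager scheduler; do
	    # Keep the file only if it targets the cluster endpoint; else remove it.
	    sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
	      || sudo rm -f /etc/kubernetes/$f.conf
	  done
	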
	I0531 18:00:51.931258  253603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:00:51.937908  253603 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:00:51.937925  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:51.981409  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.730818  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.866579  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.918070  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
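	
	Note that the reconfigure does not re-run a full kubeadm init; it replays five individual phases against the staged config. The same sequence as the five ssh_runner calls above, condensed (same binary and config paths as in this run):
	
	  K=/var/lib/minikube/binaries/v1.23.6
	  CFG=/var/tmp/minikube/kubeadm.yaml
	  # Each iteration word-splits into the phase name plus its subcommand.
	  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="$K:$PATH" kubeadm init phase $phase --config "$CFG"
	  done
	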
	I0531 18:00:52.960507  253603 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:00:52.960554  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:53.469301  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:53.969201  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:54.469096  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:54.968777  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:55.468873  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:55.968873  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:56.468973  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:56.969026  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:57.468917  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:57.968887  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:58.469411  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:58.969742  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:59.011037  253603 api_server.go:71] duration metric: took 6.050532367s to wait for apiserver process to appear ...
	I0531 18:00:59.011067  253603 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:00:59.011079  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:00:59.011494  253603 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0531 18:00:59.512207  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:02.105106  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:01:02.105133  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:01:02.512478  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:02.516889  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:01:02.516910  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:01:03.012313  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:03.016705  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:01:03.016731  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:01:03.512288  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:03.516555  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 18:01:03.522009  253603 api_server.go:140] control plane version: v1.23.6
	I0531 18:01:03.522027  253603 api_server.go:130] duration metric: took 4.510954896s to wait for apiserver health ...
	I0531 18:01:03.522036  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:01:03.522043  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:01:03.524134  253603 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:01:03.525439  253603 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:01:03.529095  253603 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:01:03.529112  253603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:01:03.541449  253603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
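	
	As logged above, the CNI manifest is first copied from memory to /var/tmp/minikube/cni.yaml on the node and then applied with the bundled kubectl against the node-local kubeconfig, i.e.:
	
	  # Apply the staged CNI (kindnet) manifest the same way minikube does.
	  sudo /var/lib/minikube/binaries/v1.23.6/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    apply -f /var/tmp/minikube/cni.yaml
	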
	I0531 18:01:04.388379  253603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:01:04.394833  253603 system_pods.go:59] 9 kube-system pods found
	I0531 18:01:04.394868  253603 system_pods.go:61] "coredns-64897985d-lv5sq" [203ef58b-2708-4651-ab86-861f5fa69372] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394878  253603 system_pods.go:61] "etcd-newest-cni-20220531175602-6903" [5f7cb658-1ceb-4c47-8bee-effc9614aa23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:01:04.394887  253603 system_pods.go:61] "kindnet-dfhrt" [d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:01:04.394895  253603 system_pods.go:61] "kube-apiserver-newest-cni-20220531175602-6903" [d67d50e2-7deb-4c61-81a8-36012352da0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:01:04.394908  253603 system_pods.go:61] "kube-controller-manager-newest-cni-20220531175602-6903" [d1b6d2af-6fb6-4c89-bfbf-d40d8c24f4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:01:04.394914  253603 system_pods.go:61] "kube-proxy-44xvh" [aa6aeb24-8d4b-4960-99a0-65d0493743bf] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:01:04.394927  253603 system_pods.go:61] "kube-scheduler-newest-cni-20220531175602-6903" [5f2f1f7a-3268-4649-9525-4849378243c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:01:04.394933  253603 system_pods.go:61] "metrics-server-b955d9d8-64wmz" [9995c513-1795-474c-833a-e60e61dc0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394938  253603 system_pods.go:61] "storage-provisioner" [368826ec-ecba-4116-971f-0e75506e77bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394945  253603 system_pods.go:74] duration metric: took 6.541942ms to wait for pod list to return data ...
	I0531 18:01:04.394952  253603 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:01:04.397297  253603 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:01:04.397318  253603 node_conditions.go:123] node cpu capacity is 8
	I0531 18:01:04.397328  253603 node_conditions.go:105] duration metric: took 2.369222ms to run NodePressure ...
	I0531 18:01:04.397343  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:01:04.522242  253603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:01:04.528860  253603 ops.go:34] apiserver oom_adj: -16
	I0531 18:01:04.528888  253603 kubeadm.go:630] restartCluster took 15.759378612s
	I0531 18:01:04.528897  253603 kubeadm.go:397] StartCluster complete in 15.801681788s
	I0531 18:01:04.528917  253603 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:01:04.529033  253603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:01:04.530679  253603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:01:04.533767  253603 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531175602-6903" rescaled to 1
	I0531 18:01:04.533818  253603 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:01:04.533838  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:01:04.536326  253603 out.go:177] * Verifying Kubernetes components...
	I0531 18:01:04.533856  253603 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 18:01:04.534015  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:01:04.537649  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:01:04.537683  253603 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537700  253603 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537713  253603 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531175602-6903"
	I0531 18:01:04.537713  253603 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537715  253603 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531175602-6903"
	W0531 18:01:04.537721  253603 addons.go:165] addon storage-provisioner should already be in state true
	W0531 18:01:04.537727  253603 addons.go:165] addon metrics-server should already be in state true
	I0531 18:01:04.537767  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.537777  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.537687  253603 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537727  253603 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531175602-6903"
	I0531 18:01:04.537814  253603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531175602-6903"
	W0531 18:01:04.537839  253603 addons.go:165] addon dashboard should already be in state true
	I0531 18:01:04.537886  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.538099  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538258  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538288  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538354  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.582251  253603 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:01:04.583780  253603 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:01:04.585078  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:01:04.585101  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:01:04.585148  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.586519  253603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:01:04.588458  253603 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:01:04.589819  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:01:04.589835  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:01:04.589870  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.588540  253603 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:01:04.589914  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:01:04.589608  253603 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531175602-6903"
	W0531 18:01:04.589994  253603 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:01:04.590025  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.590456  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.589970  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.622440  253603 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:01:04.622511  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:01:04.622642  253603 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 18:01:04.633508  253603 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:01:04.633529  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:01:04.633581  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.633878  253603 api_server.go:71] duration metric: took 100.025723ms to wait for apiserver process to appear ...
	I0531 18:01:04.633902  253603 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:01:04.633915  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:04.636308  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.639626  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 18:01:04.640522  253603 api_server.go:140] control plane version: v1.23.6
	I0531 18:01:04.640542  253603 api_server.go:130] duration metric: took 6.632874ms to wait for apiserver health ...
	I0531 18:01:04.640552  253603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:01:04.641487  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.650123  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.651235  253603 system_pods.go:59] 9 kube-system pods found
	I0531 18:01:04.651429  253603 system_pods.go:61] "coredns-64897985d-lv5sq" [203ef58b-2708-4651-ab86-861f5fa69372] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651499  253603 system_pods.go:61] "etcd-newest-cni-20220531175602-6903" [5f7cb658-1ceb-4c47-8bee-effc9614aa23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:01:04.651514  253603 system_pods.go:61] "kindnet-dfhrt" [d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:01:04.651525  253603 system_pods.go:61] "kube-apiserver-newest-cni-20220531175602-6903" [d67d50e2-7deb-4c61-81a8-36012352da0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:01:04.651537  253603 system_pods.go:61] "kube-controller-manager-newest-cni-20220531175602-6903" [d1b6d2af-6fb6-4c89-bfbf-d40d8c24f4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:01:04.651547  253603 system_pods.go:61] "kube-proxy-44xvh" [aa6aeb24-8d4b-4960-99a0-65d0493743bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:01:04.651557  253603 system_pods.go:61] "kube-scheduler-newest-cni-20220531175602-6903" [5f2f1f7a-3268-4649-9525-4849378243c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:01:04.651565  253603 system_pods.go:61] "metrics-server-b955d9d8-64wmz" [9995c513-1795-474c-833a-e60e61dc0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651574  253603 system_pods.go:61] "storage-provisioner" [368826ec-ecba-4116-971f-0e75506e77bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651580  253603 system_pods.go:74] duration metric: took 11.022992ms to wait for pod list to return data ...
	I0531 18:01:04.651588  253603 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:01:04.653854  253603 default_sa.go:45] found service account: "default"
	I0531 18:01:04.653878  253603 default_sa.go:55] duration metric: took 2.284188ms for default service account to be created ...
	I0531 18:01:04.653893  253603 kubeadm.go:572] duration metric: took 120.041989ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 18:01:04.653922  253603 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:01:04.656488  253603 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:01:04.656514  253603 node_conditions.go:123] node cpu capacity is 8
	I0531 18:01:04.656527  253603 node_conditions.go:105] duration metric: took 2.599307ms to run NodePressure ...
	I0531 18:01:04.656538  253603 start.go:213] waiting for startup goroutines ...
	I0531 18:01:04.673010  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.728342  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:01:04.728368  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:01:04.736428  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:01:04.736451  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:01:04.742828  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:01:04.742852  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:01:04.746024  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:01:04.750055  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:01:04.750076  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:01:04.758284  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:01:04.758304  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:01:04.801922  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:01:04.801947  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:01:04.802275  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:01:04.807930  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:01:04.820976  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:01:04.821004  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:01:04.911836  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:01:04.911866  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:01:04.931751  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:01:04.931779  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:01:05.022410  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:01:05.022437  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:01:05.105670  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:01:05.105701  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:01:05.123433  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:01:05.123460  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:01:05.202647  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:01:05.305415  253603 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531175602-6903"
	I0531 18:01:05.471026  253603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:01:05.472226  253603 addons.go:417] enableAddons completed in 938.375737ms
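	
	For reference, the same addons can also be toggled from the host with the minikube CLI after start, rather than via start flags (a sketch; profile name as in this run):
	
	  out/minikube-linux-amd64 -p newest-cni-20220531175602-6903 addons enable metrics-server
	  out/minikube-linux-amd64 -p newest-cni-20220531175602-6903 addons enable dashboard
	  out/minikube-linux-amd64 -p newest-cni-20220531175602-6903 addons list
	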
	I0531 18:01:05.510490  253603 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0531 18:01:05.512509  253603 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531175602-6903" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ef96bc146e16f       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   df6575866fdea
	b12fda9e12e52       4c03754524064       12 minutes ago      Running             kube-proxy                0                   988de4837f61f
	91afec248cd26       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   8ae5c296424b2
	2d2cb82735b88       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   ac6eb1a1a0685
	0d1755990bfb1       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   dc54f5b9ebd0e
	c25ff47b27774       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   d852e12f002ef
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:53:25 UTC, end at Tue 2022-05-31 18:06:19 UTC. --
	May 31 17:59:39 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:39.540719640Z" level=warning msg="cleaning up after shim disconnected" id=7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb namespace=k8s.io
	May 31 17:59:39 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:39.540735610Z" level=info msg="cleaning up dead shim"
	May 31 17:59:39 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:39.549493789Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:59:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\n"
	May 31 17:59:40 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:40.476348155Z" level=info msg="RemoveContainer for \"2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee\""
	May 31 17:59:40 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:40.480884825Z" level=info msg="RemoveContainer for \"2cf4809512c1b82745b3759c6455f840d841645acc9ebaf1806ad2393ecca2ee\" returns successfully"
	May 31 17:59:53 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:53.832949086Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	May 31 17:59:53 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:53.848216078Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\""
	May 31 17:59:53 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:53.848792123Z" level=info msg="StartContainer for \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\""
	May 31 17:59:53 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T17:59:53.917342023Z" level=info msg="StartContainer for \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\" returns successfully"
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.142401191Z" level=info msg="shim disconnected" id=52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.142467418Z" level=warning msg="cleaning up after shim disconnected" id=52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21 namespace=k8s.io
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.142483068Z" level=info msg="cleaning up dead shim"
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.151185195Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:02:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2988 runtime=io.containerd.runc.v2\n"
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.774162475Z" level=info msg="RemoveContainer for \"7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb\""
	May 31 18:02:34 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:02:34.778503623Z" level=info msg="RemoveContainer for \"7b4a921aa6a0031f2edf3d7bda1bc2ff3de9ec54e38365bc1d9f6607f5689bbb\" returns successfully"
	May 31 18:03:00 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:03:00.832508669Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	May 31 18:03:00 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:03:00.845123983Z" level=info msg="CreateContainer within sandbox \"df6575866fdeac907c0b1600d02151c0657880b5cff3a10fb579edeb7ff44724\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516\""
	May 31 18:03:00 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:03:00.845458906Z" level=info msg="StartContainer for \"ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516\""
	May 31 18:03:00 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:03:00.915513620Z" level=info msg="StartContainer for \"ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516\" returns successfully"
	May 31 18:05:41 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:41.140294527Z" level=info msg="shim disconnected" id=ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516
	May 31 18:05:41 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:41.140355763Z" level=warning msg="cleaning up after shim disconnected" id=ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516 namespace=k8s.io
	May 31 18:05:41 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:41.140375603Z" level=info msg="cleaning up dead shim"
	May 31 18:05:41 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:41.150012559Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:05:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3084 runtime=io.containerd.runc.v2\n"
	May 31 18:05:42 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:42.081916800Z" level=info msg="RemoveContainer for \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\""
	May 31 18:05:42 no-preload-20220531175323-6903 containerd[503]: time="2022-05-31T18:05:42.085998365Z" level=info msg="RemoveContainer for \"52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21\" returns successfully"
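	
	The containerd log above shows kindnet-cni crash-looping: attempts 2 and 3 each start successfully, run for a few minutes, then their shims disconnect and the previous attempt's container is removed. To pull the failing container's own output on the node (a sketch; crictl accepts the truncated IDs from the container-status table above):
	
	  sudo crictl ps -a --name kindnet-cni              # list all attempts, running and exited
	  sudo crictl logs ef96bc146e16f                    # output of the last attempt
	  sudo crictl inspect ef96bc146e16f | grep -i exit  # exit code and reason
	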
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220531175323-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220531175323-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=no-preload-20220531175323-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_54_00_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:53:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220531175323-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:06:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:04:47 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:04:47 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:04:47 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:04:47 +0000   Tue, 31 May 2022 17:53:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220531175323-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                3f650030-6900-444d-b03b-802678a62df1
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220531175323-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-n856k                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-20220531175323-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-20220531175323-6903    200m (2%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8szbz                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-20220531175323-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 12m   kube-proxy  
	  Normal  Starting                 12m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet     Updated Node Allocatable limit across pods
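	
	The describe output above ties the failure together: the CNI plugin never initialized, so the node stays NotReady and keeps its node.kubernetes.io/not-ready:NoSchedule taint, which is why only control-plane and DaemonSet pods appear in the Non-terminated Pods table. A quick check from the host (a sketch; minikube normally names the kubectl context after the profile):
	
	  kubectl --context no-preload-20220531175323-6903 get node \
	    -o jsonpath='{.items[0].spec.taints}'
	  kubectl --context no-preload-20220531175323-6903 describe node | grep -A1 Taints
	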
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66] <==
	* {"level":"info","ts":"2022-05-31T17:53:54.707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-05-31T17:53:54.707Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220531175323-6903 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:53:54.708Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:53:54.709Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:53:54.710Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-05-31T17:55:16.484Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"114.714125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:55:16.484Z","caller":"traceutil/trace.go:171","msg":"trace[628559206] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:490; }","duration":"114.829832ms","start":"2022-05-31T17:55:16.369Z","end":"2022-05-31T17:55:16.484Z","steps":["trace[628559206] 'range keys from in-memory index tree'  (duration: 101.483588ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:56:07.995Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"269.683342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220531175323-6903\" ","response":"range_response_count:1 size:3986"}
	{"level":"info","ts":"2022-05-31T17:56:07.995Z","caller":"traceutil/trace.go:171","msg":"trace[1668449359] range","detail":"{range_begin:/registry/minions/no-preload-20220531175323-6903; range_end:; response_count:1; response_revision:502; }","duration":"269.788171ms","start":"2022-05-31T17:56:07.725Z","end":"2022-05-31T17:56:07.995Z","steps":["trace[1668449359] 'range keys from in-memory index tree'  (duration: 269.568906ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T17:56:08.636Z","caller":"traceutil/trace.go:171","msg":"trace[630377434] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"212.121446ms","start":"2022-05-31T17:56:08.424Z","end":"2022-05-31T17:56:08.636Z","steps":["trace[630377434] 'process raft request'  (duration: 175.233347ms)","trace[630377434] 'compare'  (duration: 36.797114ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:13.831Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.273396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-20220531175323-6903\" ","response":"range_response_count:1 size:3986"}
	{"level":"info","ts":"2022-05-31T17:56:13.831Z","caller":"traceutil/trace.go:171","msg":"trace[1174066299] range","detail":"{range_begin:/registry/minions/no-preload-20220531175323-6903; range_end:; response_count:1; response_revision:503; }","duration":"105.354475ms","start":"2022-05-31T17:56:13.726Z","end":"2022-05-31T17:56:13.831Z","steps":["trace[1174066299] 'range keys from in-memory index tree'  (duration: 105.148687ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:59:31.665Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"276.327041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/busybox.16f44254ee06029c\" ","response":"range_response_count:1 size:676"}
	{"level":"info","ts":"2022-05-31T17:59:31.665Z","caller":"traceutil/trace.go:171","msg":"trace[668313946] range","detail":"{range_begin:/registry/events/default/busybox.16f44254ee06029c; range_end:; response_count:1; response_revision:560; }","duration":"276.422914ms","start":"2022-05-31T17:59:31.389Z","end":"2022-05-31T17:59:31.665Z","steps":["trace[668313946] 'agreement among raft nodes before linearized reading'  (duration: 93.332153ms)","trace[668313946] 'range keys from in-memory index tree'  (duration: 182.956428ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:59:31.771Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.247075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.16f4421cb5fb51d8\" ","response":"range_response_count:1 size:724"}
	{"level":"info","ts":"2022-05-31T17:59:31.771Z","caller":"traceutil/trace.go:171","msg":"trace[34538388] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.16f4421cb5fb51d8; range_end:; response_count:1; response_revision:561; }","duration":"100.374417ms","start":"2022-05-31T17:59:31.670Z","end":"2022-05-31T17:59:31.771Z","steps":["trace[34538388] 'agreement among raft nodes before linearized reading'  (duration: 97.570176ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T18:03:54.970Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":552}
	{"level":"info","ts":"2022-05-31T18:03:54.971Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":552,"took":"442.928µs"}
	
	* 
	* ==> kernel <==
	*  18:06:19 up  1:48,  0 users,  load average: 0.52, 0.64, 1.20
	Linux no-preload-20220531175323-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509] <==
	* I0531 17:53:57.037883       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:53:57.037926       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:53:57.040098       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:53:57.101460       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 17:53:57.101814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:53:57.101904       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:53:57.936377       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:53:57.936399       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:53:57.941719       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:53:57.944313       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:53:57.944333       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:53:58.331802       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:53:58.360400       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:53:58.421652       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:53:58.426532       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0531 17:53:58.427280       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:53:58.430236       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:53:59.065574       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:53:59.723054       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:53:59.729203       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:53:59.737186       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:54:04.817265       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:54:12.817514       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:54:12.904631       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:54:13.634581       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d] <==
	* I0531 17:54:12.902059       1 shared_informer.go:247] Caches are synced for node 
	I0531 17:54:12.902089       1 range_allocator.go:173] Starting range CIDR allocator
	I0531 17:54:12.902093       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0531 17:54:12.902100       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0531 17:54:12.902118       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0531 17:54:12.902170       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0531 17:54:12.902710       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0531 17:54:12.902766       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0531 17:54:12.902864       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:54:12.903597       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-ndl5c"
	I0531 17:54:12.916498       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8cptk"
	I0531 17:54:12.918139       1 range_allocator.go:374] Set node no-preload-20220531175323-6903 PodCIDR to [10.244.0.0/24]
	I0531 17:54:12.924160       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n856k"
	I0531 17:54:12.924335       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8szbz"
	I0531 17:54:13.002190       1 shared_informer.go:247] Caches are synced for cronjob 
	I0531 17:54:13.101463       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:54:13.101545       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:54:13.101587       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:54:13.111547       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 17:54:13.111574       1 disruption.go:371] Sending events to api server.
	I0531 17:54:13.138785       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:54:13.145126       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-ndl5c"
	I0531 17:54:13.511066       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:54:13.511070       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:54:13.511116       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e] <==
	* I0531 17:54:13.606562       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 17:54:13.606652       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 17:54:13.606701       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:54:13.631765       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:54:13.631796       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:54:13.631804       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:54:13.631825       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:54:13.632185       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:54:13.632671       1 config.go:317] "Starting service config controller"
	I0531 17:54:13.632688       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:54:13.632706       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:54:13.632709       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:54:13.735397       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 17:54:13.735427       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b] <==
	* W0531 17:53:57.017973       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:53:57.018249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:53:57.018293       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:57.018334       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:53:57.018340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:57.018350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:53:57.866168       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:53:57.866195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:53:57.880204       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:57.880227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:58.004410       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 17:53:58.004458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:53:58.020771       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:53:58.020798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:53:58.044767       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:53:58.044798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:53:58.102366       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:53:58.102398       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:53:58.102392       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:53:58.102427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:53:58.155068       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:53:58.155107       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:53:58.202503       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:53:58.202555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0531 17:53:58.514561       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:53:25 UTC, end at Tue 2022-05-31 18:06:19 UTC. --
	May 31 18:04:50 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:04:50.161044    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:04:55 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:04:55.162485    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:00 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:00.163450    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:05 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:05.164466    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:10 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:10.165312    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:15 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:15.166766    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:20 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:20.168194    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:25 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:25.168890    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:30 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:30.169815    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:35 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:35.170945    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:40 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:40.172582    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:42 no-preload-20220531175323-6903 kubelet[1738]: I0531 18:05:42.080655    1738 scope.go:110] "RemoveContainer" containerID="52a49c81004a31d8331ccc40f7eea42ad6fde8a9da6e87a95ba8d15800d3cc21"
	May 31 18:05:42 no-preload-20220531175323-6903 kubelet[1738]: I0531 18:05:42.081008    1738 scope.go:110] "RemoveContainer" containerID="ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	May 31 18:05:42 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:42.081396    1738 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n856k_kube-system(1bf232e0-3302-4413-8693-378d7bcc2bad)\"" pod="kube-system/kindnet-n856k" podUID=1bf232e0-3302-4413-8693-378d7bcc2bad
	May 31 18:05:45 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:45.173478    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:50 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:50.174814    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:05:54 no-preload-20220531175323-6903 kubelet[1738]: I0531 18:05:54.830002    1738 scope.go:110] "RemoveContainer" containerID="ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	May 31 18:05:54 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:54.830300    1738 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n856k_kube-system(1bf232e0-3302-4413-8693-378d7bcc2bad)\"" pod="kube-system/kindnet-n856k" podUID=1bf232e0-3302-4413-8693-378d7bcc2bad
	May 31 18:05:55 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:05:55.176156    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:00 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:00.177362    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:05 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:05.178242    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:08 no-preload-20220531175323-6903 kubelet[1738]: I0531 18:06:08.829724    1738 scope.go:110] "RemoveContainer" containerID="ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	May 31 18:06:08 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:08.829981    1738 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n856k_kube-system(1bf232e0-3302-4413-8693-378d7bcc2bad)\"" pod="kube-system/kindnet-n856k" podUID=1bf232e0-3302-4413-8693-378d7bcc2bad
	May 31 18:06:10 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:10.178922    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:15 no-preload-20220531175323-6903 kubelet[1738]: E0531 18:06:15.180399    1738 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
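Note: the kubelet log above contains the root cause of this DeployApp failure. The kindnet-cni container in pod kindnet-n856k is stuck in CrashLoopBackOff, so the CNI never initializes ("Network plugin returns error: cni plugin not initialized"), the node never reports Ready, and the scheduler keeps the node.kubernetes.io/not-ready taint that leaves the busybox pod Pending. A minimal triage sketch, assuming the profile's kubectl context is still available (the pod and container names are taken from the log above):

    # Confirm the node is NotReady and still carries the not-ready taint
    kubectl --context no-preload-20220531175323-6903 get nodes
    kubectl --context no-preload-20220531175323-6903 describe node no-preload-20220531175323-6903 | grep -A2 Taints
    # Read the previous (crashed) kindnet-cni log to see why the CNI pod keeps dying
    kubectl --context no-preload-20220531175323-6903 -n kube-system logs kindnet-n856k -c kindnet-cni --previous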
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-8cptk storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 describe pod busybox coredns-64897985d-8cptk storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220531175323-6903 describe pod busybox coredns-64897985d-8cptk storage-provisioner: exit status 1 (54.720162ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dw8dv (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-dw8dv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  48s (x8 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-8cptk" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220531175323-6903 describe pod busybox coredns-64897985d-8cptk storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (484.27s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [ea1ac2f7-13ef-43d6-a292-61cbbfc28f3c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0531 17:59:58.425284    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:58.430551    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:58.440767    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:58.461935    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:58.502260    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:58.583226    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:58.743612    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:59.064646    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:59.591264    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 17:59:59.705128    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:59.712305    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:59.717566    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:59.727808    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:59.748098    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:59.788367    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 17:59:59.868623    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:00.029275    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:00.350303    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:00.985681    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:00.990795    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:02.271418    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:03.546657    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:04.832358    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:08.667474    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:09.953451    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
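Note: the cert_rotation errors above are noise from the shared test process (pid 6903), not from this test: its client-certificate watcher still references the enable-default-cni-20220531174029-6903, bridge-20220531174029-6903, and old-k8s-version-20220531174534-6903 profiles, whose directories under .minikube/profiles were removed when those profiles were deleted (see the Audit table further down), so every rotation attempt fails with "no such file or directory". A quick check, assuming shell access to the Jenkins agent:

    # The deleted profiles should be absent from the profiles directory
    ls /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/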

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: ***** TestStartStop/group/default-k8s-different-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:198: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
start_stop_delete_test.go:198: TestStartStop/group/default-k8s-different-port/serial/DeployApp: showing logs for failed pods as of 2022-05-31 18:07:54.33512739 +0000 UTC m=+3325.860273663
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 describe po busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context default-k8s-different-port-20220531175509-6903 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68wn9 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-68wn9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  47s (x8 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 logs busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context default-k8s-different-port-20220531175509-6903 logs busybox -n default:
start_stop_delete_test.go:198: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
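Note: the events above make the timeout mechanical: the scheduler retried 8 times over roughly 8 minutes and hit the same not-ready taint every time, so the 8m0s wait could never succeed. The same wait and failure mode can be reproduced outside the harness with stock kubectl (a sketch, reusing the context name from this test):

    # Mirrors the harness's 8m wait for the busybox pod to become Ready
    kubectl --context default-k8s-different-port-20220531175509-6903 -n default wait pod -l integration-test=busybox --for=condition=Ready --timeout=480s
    # List the scheduling failures the harness saw in the Events section
    kubectl --context default-k8s-different-port-20220531175509-6903 -n default get events --field-selector reason=FailedScheduling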
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220531175509-6903
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220531175509-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a",
	        "Created": "2022-05-31T17:55:17.80847266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238395,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:55:18.158165808Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hosts",
	        "LogPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a-json.log",
	        "Name": "/default-k8s-different-port-20220531175509-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220531175509-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220531175509-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220531175509-6903",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220531175509-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220531175509-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8eaff00a202d06cce1c8d58235602194947fd26c7a48f709899b5f65739bc85",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8eaff00a202",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220531175509-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b24400321365",
	                        "default-k8s-different-port-20220531175509-6903"
	                    ],
	                    "NetworkID": "6fc1f79f54eab1e8df36883c8283b483c18aa0e383b30bdb7aa37eb035c0586e",
	                    "EndpointID": "2b828753c599e8680fae2d033551c2f135b67a4addb875098c2181def1415f01",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
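Note: the docker inspect output largely rules out the Docker layer: the container is Running (Pid 238395, ExitCode 0), and the "different-port" profile publishes the API server on container port 8444 (minikube's default is 8443), bound to 127.0.0.1:49409. A sketch for reading that mapping back, assuming the container is still up:

    docker port default-k8s-different-port-20220531175509-6903 8444
    # or via an inspect template:
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' default-k8s-different-port-20220531175509-6903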
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220531175509-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| unpause | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903                     | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                                 | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:06:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:06:31.856563  261225 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:06:31.856712  261225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:06:31.856722  261225 out.go:309] Setting ErrFile to fd 2...
	I0531 18:06:31.856727  261225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:06:31.856832  261225 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:06:31.857034  261225 out.go:303] Setting JSON to false
	I0531 18:06:31.858042  261225 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6543,"bootTime":1654013849,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:06:31.858099  261225 start.go:125] virtualization: kvm guest
	I0531 18:06:31.860371  261225 out.go:177] * [no-preload-20220531175323-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:06:31.861722  261225 notify.go:193] Checking for updates...
	I0531 18:06:31.861741  261225 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:06:31.863130  261225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:06:31.864624  261225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:06:31.865934  261225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:06:31.867316  261225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:06:31.868940  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:06:31.869397  261225 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:06:31.907400  261225 docker.go:137] docker version: linux-20.10.16
	I0531 18:06:31.907473  261225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:06:32.005401  261225 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:06:31.935579157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:06:32.005490  261225 docker.go:254] overlay module found
	I0531 18:06:32.008184  261225 out.go:177] * Using the docker driver based on existing profile
	I0531 18:06:32.009506  261225 start.go:284] selected driver: docker
	I0531 18:06:32.009519  261225 start.go:806] validating driver "docker" against &{Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:32.009608  261225 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:06:32.010442  261225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:06:32.108530  261225 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:06:32.039046549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:06:32.108794  261225 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:06:32.108817  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:06:32.108827  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:06:32.108849  261225 start_flags.go:306] config:
	{Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:32.111037  261225 out.go:177] * Starting control plane node no-preload-20220531175323-6903 in cluster no-preload-20220531175323-6903
	I0531 18:06:32.112380  261225 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:06:32.113769  261225 out.go:177] * Pulling base image ...
	I0531 18:06:32.115200  261225 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:06:32.115228  261225 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:06:32.115343  261225 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json ...
	I0531 18:06:32.115478  261225 cache.go:107] acquiring lock: {Name:mke7c3123bbb887802876b6038e785eff1d65578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115516  261225 cache.go:107] acquiring lock: {Name:mkccfd735c16da1ed9ea4fc459feb477365b33a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115520  261225 cache.go:107] acquiring lock: {Name:mk598b9f501113e758a5b1053c8a9a41e87e7c42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115517  261225 cache.go:107] acquiring lock: {Name:mk92196aa514c10ef84dd2326a35399f7c3719a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115545  261225 cache.go:107] acquiring lock: {Name:mk59854aac2611f794ffa59524077b81afbc7de4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115552  261225 cache.go:107] acquiring lock: {Name:mk37d69d4525de4b98ff3597b4269e1680132b96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115480  261225 cache.go:107] acquiring lock: {Name:mka8d6fd8013f251c85f4bca8a18522e173be81e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115558  261225 cache.go:107] acquiring lock: {Name:mk4a95c9ed8757a79d1e9fa1e44efcaead7631e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115785  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 18:06:32.115815  261225 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 348.663µs
	I0531 18:06:32.115829  261225 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 18:06:32.115875  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 exists
	I0531 18:06:32.115877  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 exists
	I0531 18:06:32.115899  261225 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6" took 392.805µs
	I0531 18:06:32.115911  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0531 18:06:32.115912  261225 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 succeeded
	I0531 18:06:32.115913  261225 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6" took 404.132µs
	I0531 18:06:32.115930  261225 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 399.123µs
	I0531 18:06:32.115947  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 exists
	I0531 18:06:32.115972  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 exists
	I0531 18:06:32.115973  261225 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6" took 444.025µs
	I0531 18:06:32.115992  261225 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6" took 526.55µs
	I0531 18:06:32.115932  261225 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 succeeded
	I0531 18:06:32.115998  261225 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 succeeded
	I0531 18:06:32.116024  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0531 18:06:32.115948  261225 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0531 18:06:32.115887  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0531 18:06:32.116038  261225 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 484.283µs
	I0531 18:06:32.116056  261225 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0531 18:06:32.116054  261225 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 533.964µs
	I0531 18:06:32.116074  261225 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0531 18:06:32.116007  261225 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 succeeded
	I0531 18:06:32.116089  261225 cache.go:87] Successfully saved all images to host disk.
	I0531 18:06:32.161016  261225 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:06:32.161038  261225 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:06:32.161053  261225 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:06:32.161092  261225 start.go:352] acquiring machines lock for no-preload-20220531175323-6903: {Name:mk8635283b759be2fcd7aacbafc64b0c778ff5b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.161181  261225 start.go:356] acquired machines lock for "no-preload-20220531175323-6903" in 68.368µs
	I0531 18:06:32.161203  261225 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:06:32.161208  261225 fix.go:55] fixHost starting: 
	I0531 18:06:32.161424  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:06:32.191567  261225 fix.go:103] recreateIfNeeded on no-preload-20220531175323-6903: state=Stopped err=<nil>
	W0531 18:06:32.191592  261225 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:06:32.194700  261225 out.go:177] * Restarting existing docker container for "no-preload-20220531175323-6903" ...
	I0531 18:06:32.196063  261225 cli_runner.go:164] Run: docker start no-preload-20220531175323-6903
	I0531 18:06:32.572533  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:06:32.606201  261225 kic.go:416] container "no-preload-20220531175323-6903" state is running.
	I0531 18:06:32.606544  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:32.637813  261225 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json ...
	I0531 18:06:32.637995  261225 machine.go:88] provisioning docker machine ...
	I0531 18:06:32.638016  261225 ubuntu.go:169] provisioning hostname "no-preload-20220531175323-6903"
	I0531 18:06:32.638050  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:32.668506  261225 main.go:134] libmachine: Using SSH client type: native
	I0531 18:06:32.668682  261225 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0531 18:06:32.668704  261225 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220531175323-6903 && echo "no-preload-20220531175323-6903" | sudo tee /etc/hostname
	I0531 18:06:32.669243  261225 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46970->127.0.0.1:49432: read: connection reset by peer
	I0531 18:06:35.786250  261225 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220531175323-6903
	
	I0531 18:06:35.786326  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:35.821236  261225 main.go:134] libmachine: Using SSH client type: native
	I0531 18:06:35.821365  261225 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0531 18:06:35.821383  261225 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220531175323-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220531175323-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220531175323-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:06:35.934343  261225 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:06:35.934366  261225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:06:35.934410  261225 ubuntu.go:177] setting up certificates
	I0531 18:06:35.934428  261225 provision.go:83] configureAuth start
	I0531 18:06:35.934476  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:35.965223  261225 provision.go:138] copyHostCerts
	I0531 18:06:35.965272  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:06:35.965282  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:06:35.965344  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:06:35.965427  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:06:35.965439  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:06:35.965462  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:06:35.965511  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:06:35.965519  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:06:35.965539  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:06:35.965578  261225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220531175323-6903 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220531175323-6903]
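
The SAN list logged above (the node IP 192.168.67.2, loopback, and the hostname aliases) is what ends up in the regenerated machine server certificate. As a rough sketch of what that generation step amounts to, not minikube's actual libmachine code, something like the following Go issues a CA-signed server certificate carrying those SANs (all names illustrative):

// Sketch only: issue a server certificate with the SAN list logged above.
// minikube's real implementation lives in libmachine's cert handling.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A self-signed CA stands in for .minikube/certs/ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs shown in the provision.go log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-20220531175323-6903"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-20220531175323-6903"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}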
	I0531 18:06:36.057355  261225 provision.go:172] copyRemoteCerts
	I0531 18:06:36.057402  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:06:36.057430  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.089999  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.169898  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0531 18:06:36.186339  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:06:36.202145  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:06:36.217945  261225 provision.go:86] duration metric: configureAuth took 283.507566ms
	I0531 18:06:36.217967  261225 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:06:36.218141  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:06:36.218159  261225 machine.go:91] provisioned docker machine in 3.58014978s
	I0531 18:06:36.218168  261225 start.go:306] post-start starting for "no-preload-20220531175323-6903" (driver="docker")
	I0531 18:06:36.218179  261225 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:06:36.218216  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:06:36.218249  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.250462  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.329903  261225 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:06:36.332443  261225 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:06:36.332472  261225 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:06:36.332481  261225 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:06:36.332487  261225 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:06:36.332499  261225 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:06:36.332539  261225 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:06:36.332602  261225 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:06:36.332675  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:06:36.338862  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:06:36.355254  261225 start.go:309] post-start completed in 137.071829ms
	I0531 18:06:36.355304  261225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:06:36.355336  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.386735  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.467076  261225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:06:36.470822  261225 fix.go:57] fixHost completed within 4.309609112s
	I0531 18:06:36.470844  261225 start.go:81] releasing machines lock for "no-preload-20220531175323-6903", held for 4.309648254s
	I0531 18:06:36.470905  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:36.502427  261225 ssh_runner.go:195] Run: systemctl --version
	I0531 18:06:36.502473  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.502475  261225 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:06:36.502528  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.537057  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.539320  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.638776  261225 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:06:36.649832  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:06:36.658496  261225 docker.go:187] disabling docker service ...
	I0531 18:06:36.658539  261225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:06:36.667272  261225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:06:36.675216  261225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:06:36.752203  261225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:06:36.818959  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:06:36.827401  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:06:36.839221  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
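
The long printf argument above is simply the base64 encoding of the /etc/containerd/config.toml that minikube generated; readable fragments decode to conf_dir = "/etc/cni/net.mk", sandbox_image = "k8s.gcr.io/pause:3.6", and SystemdCgroup = false. A minimal sketch for inspecting such a payload offline (only the first chunk of the blob is shown; substitute the full string from the log line to recover the whole file):

// Sketch: decode the base64 payload minikube ships to the node.
package main

import (
	"encoding/base64"
	"fmt"
	"log"
)

func main() {
	// First chunk of the payload above; it decodes to "version = 2".
	// Paste the complete string to print the entire config.toml.
	blob := "dmVyc2lvbiA9IDIK"
	raw, err := base64.StdEncoding.DecodeString(blob)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(raw))
}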
	I0531 18:06:36.851589  261225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:06:36.857335  261225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:06:36.865201  261225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:06:36.934383  261225 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:06:37.001672  261225 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:06:37.001743  261225 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:06:37.005089  261225 start.go:468] Will wait 60s for crictl version
	I0531 18:06:37.005161  261225 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:06:37.030007  261225 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:06:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:06:48.077720  261225 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:06:48.100248  261225 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:06:48.100298  261225 ssh_runner.go:195] Run: containerd --version
	I0531 18:06:48.127707  261225 ssh_runner.go:195] Run: containerd --version
	I0531 18:06:48.157240  261225 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:06:48.158764  261225 cli_runner.go:164] Run: docker network inspect no-preload-20220531175323-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
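
The --format argument in this `docker network inspect` call is a Go text/template that the docker CLI evaluates against the inspected network object to emit the JSON shape minikube wants. The mechanism in miniature, with illustrative values standing in for the real network:

// Sketch: docker's --format flag evaluates a Go text/template against the
// inspected object; the same mechanism reduced to a map.
package main

import (
	"os"
	"text/template"
)

func main() {
	tmpl := template.Must(template.New("net").Parse(
		`{"Name": "{{.Name}}", "Driver": "{{.Driver}}"}` + "\n"))
	network := map[string]string{
		"Name":   "no-preload-20220531175323-6903",
		"Driver": "bridge",
	}
	tmpl.Execute(os.Stdout, network)
}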
	I0531 18:06:48.189984  261225 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0531 18:06:48.193238  261225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:06:48.203917  261225 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:06:48.205236  261225 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:06:48.205283  261225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:06:48.227240  261225 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:06:48.227263  261225 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:06:48.227305  261225 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:06:48.249494  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:06:48.249514  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:06:48.249533  261225 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:06:48.249549  261225 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220531175323-6903 NodeName:no-preload-20220531175323-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:06:48.249720  261225 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220531175323-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
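
The generated kubeadm.yaml above is four YAML documents in a single file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that splits such a file and reports each document's kind (assumes the gopkg.in/yaml.v2 dependency; the file name is illustrative):

// Sketch: split a multi-document kubeadm.yaml and print each document's kind.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm separates documents with a bare "---" line.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
	}
}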
	
	I0531 18:06:48.249812  261225 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220531175323-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:06:48.249865  261225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:06:48.256345  261225 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:06:48.256398  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:06:48.262969  261225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (575 bytes)
	I0531 18:06:48.274664  261225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:06:48.287040  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2059 bytes)
	I0531 18:06:48.299091  261225 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:06:48.301889  261225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
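
The one-liner above rewrites /etc/hosts: strip any stale control-plane.minikube.internal entry, append the fresh IP mapping, stage the result in a temp file, and copy it into place so readers never see a half-written file. An equivalent sketch in Go (the staging file name is illustrative):

// Sketch of the /etc/hosts rewrite above: drop any stale line for the
// control-plane name, append the fresh mapping, and replace the file via a
// staging file in the same directory so the update lands atomically.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Mirrors the log's grep -v $'\tcontrol-plane.minikube.internal$'.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("192.168.67.2\t%s", host))
	tmp := "/etc/hosts.new" // illustrative staging path, same filesystem as the target
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		panic(err)
	}
}
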
	I0531 18:06:48.310656  261225 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903 for IP: 192.168.67.2
	I0531 18:06:48.310742  261225 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:06:48.310777  261225 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:06:48.310834  261225 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.key
	I0531 18:06:48.310884  261225 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key.c7fa3a9e
	I0531 18:06:48.310918  261225 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key
	I0531 18:06:48.310996  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:06:48.311025  261225 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:06:48.311034  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:06:48.311059  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:06:48.311084  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:06:48.311106  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:06:48.311181  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:06:48.311875  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:06:48.328351  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:06:48.344708  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:06:48.361384  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:06:48.377622  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:06:48.393772  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:06:48.409607  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:06:48.425962  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:06:48.441752  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:06:48.457422  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:06:48.473322  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:06:48.489365  261225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:06:48.501512  261225 ssh_runner.go:195] Run: openssl version
	I0531 18:06:48.505937  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:06:48.512677  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.515513  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.515567  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.520028  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:06:48.526318  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:06:48.533197  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.536004  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.536048  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.540484  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:06:48.546655  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:06:48.553433  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.556334  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.556368  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.560699  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
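
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory convention: /etc/ssl/certs/<subject-hash>.0 must point at the certificate for OpenSSL-based clients to find and trust it. A minimal sketch of one such step, assuming openssl is on PATH and the process is allowed to write /etc/ssl/certs:

// Sketch of the hash-and-symlink step above: compute the subject hash with
// openssl, then force-create the <hash>.0 symlink, like the ln -fs in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // equivalent of the -f in ln -fs
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}
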
	I0531 18:06:48.566833  261225 kubeadm.go:395] StartCluster: {Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:48.566936  261225 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:06:48.566963  261225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:06:48.590607  261225 cri.go:87] found id: "ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	I0531 18:06:48.590629  261225 cri.go:87] found id: "b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e"
	I0531 18:06:48.590640  261225 cri.go:87] found id: "91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b"
	I0531 18:06:48.590651  261225 cri.go:87] found id: "2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d"
	I0531 18:06:48.590665  261225 cri.go:87] found id: "0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509"
	I0531 18:06:48.590677  261225 cri.go:87] found id: "c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66"
	I0531 18:06:48.590684  261225 cri.go:87] found id: ""
	I0531 18:06:48.590707  261225 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:06:48.601948  261225 cri.go:114] JSON = null
	W0531 18:06:48.601985  261225 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
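
The crictl call above lists kube-system container IDs, and the runc list that follows returns null, so the paused-container check is skipped with the warning shown. A sketch reproducing the listing half (assumes crictl is installed and configured for the containerd socket; the log runs the same command via sudo over SSH):

// Minimal sketch of the container listing above: ask crictl for the IDs of
// kube-system containers, mirroring the command from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
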
	I0531 18:06:48.602021  261225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:06:48.608119  261225 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:06:48.608137  261225 kubeadm.go:626] restartCluster start
	I0531 18:06:48.608162  261225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:06:48.613826  261225 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:48.614554  261225 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220531175323-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:06:48.615039  261225 kubeconfig.go:127] "no-preload-20220531175323-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:06:48.615784  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:06:48.617278  261225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:06:48.623232  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:48.623290  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:48.630395  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:48.830763  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:48.830820  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:48.838930  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.031184  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.031241  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.039494  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.230727  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.230797  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.239312  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.430567  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.430662  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.438967  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.631308  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.631386  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.640008  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.831278  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.831352  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.839490  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.030797  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.030869  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.039659  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.230992  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.231065  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.239370  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.430595  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.430703  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.438937  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.631190  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.631256  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.639827  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.831099  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.831190  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.839564  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.030836  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.030912  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.039250  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.230475  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.230546  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.238738  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.431028  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.431083  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.439535  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.630862  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.630914  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.639047  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.639064  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.639103  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.646503  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.646523  261225 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
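
The repeated "Checking apiserver status ..." lines above are a poll: pgrep is retried roughly every 200ms until the kube-apiserver process appears or the wait gives up, at which point minikube decides the cluster needs reconfiguring. A rough stand-in for that loop (interval and timeout here are illustrative, not minikube's actual values; the log runs pgrep via sudo over SSH):

// Hedged sketch of the status poll above: retry pgrep on an interval until
// the apiserver process appears or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(200 * time.Millisecond) // the log shows ~200ms between checks
	}
	fmt.Println("timed out waiting for apiserver process")
}
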
	I0531 18:06:51.646531  261225 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:06:51.646545  261225 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:06:51.646589  261225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:06:51.669569  261225 cri.go:87] found id: "ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	I0531 18:06:51.669588  261225 cri.go:87] found id: "b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e"
	I0531 18:06:51.669595  261225 cri.go:87] found id: "91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b"
	I0531 18:06:51.669601  261225 cri.go:87] found id: "2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d"
	I0531 18:06:51.669608  261225 cri.go:87] found id: "0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509"
	I0531 18:06:51.669617  261225 cri.go:87] found id: "c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66"
	I0531 18:06:51.669633  261225 cri.go:87] found id: ""
	I0531 18:06:51.669640  261225 cri.go:232] Stopping containers: [ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516 b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e 91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b 2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d 0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509 c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66]
	I0531 18:06:51.669675  261225 ssh_runner.go:195] Run: which crictl
	I0531 18:06:51.672277  261225 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516 b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e 91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b 2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d 0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509 c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66
	I0531 18:06:51.696665  261225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:06:51.706131  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:06:51.712590  261225 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 17:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 17:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 17:53 /etc/kubernetes/scheduler.conf
	
	I0531 18:06:51.712632  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:06:51.718730  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:06:51.724887  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:06:51.731013  261225 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.731060  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:06:51.737056  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:06:51.743102  261225 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.743164  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:06:51.748937  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:06:51.755338  261225 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:06:51.755353  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:51.795954  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.528000  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.654713  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.709489  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.750049  261225 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:06:52.750109  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:53.257876  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:53.757835  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:54.257770  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:54.757793  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:55.258138  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:55.757795  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:56.258203  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:56.758036  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:57.257882  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:57.757890  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.258306  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.758044  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.810957  261225 api_server.go:71] duration metric: took 6.060906737s to wait for apiserver process to appear ...
	I0531 18:06:58.810993  261225 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:06:58.811006  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:06:58.811421  261225 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0531 18:06:59.312100  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:01.519859  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:07:01.519904  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:07:01.812506  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:01.816767  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:07:01.816787  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:07:02.312284  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:02.316938  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:07:02.316963  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:07:02.812304  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:02.817359  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0531 18:07:02.822648  261225 api_server.go:140] control plane version: v1.23.6
	I0531 18:07:02.822669  261225 api_server.go:130] duration metric: took 4.011670774s to wait for apiserver health ...
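
The healthz sequence above is typical of a restart: connection refused while the apiserver binds, 403 for the anonymous probe, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal probe along the same lines (TLS verification is skipped because the client may not yet trust the cluster CA; the address is copied from the log):

// Sketch of the healthz probe above: poll /healthz until it answers 200,
// tolerating the 403 and 500 responses seen while the apiserver settles.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ { // illustrative retry budget
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
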
	I0531 18:07:02.822682  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:07:02.822688  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:07:02.825359  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:07:02.826864  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:07:02.830365  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:07:02.830389  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:07:02.844337  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:07:03.565042  261225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:07:03.571091  261225 system_pods.go:59] 9 kube-system pods found
	I0531 18:07:03.571119  261225 system_pods.go:61] "coredns-64897985d-8cptk" [b7548080-9210-497c-9a72-e3d0dc790731] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571127  261225 system_pods.go:61] "etcd-no-preload-20220531175323-6903" [0c3833e1-4748-46be-b9f9-ba9743784100] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:07:03.571136  261225 system_pods.go:61] "kindnet-n856k" [1bf232e0-3302-4413-8693-378d7bcc2bad] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:07:03.571183  261225 system_pods.go:61] "kube-apiserver-no-preload-20220531175323-6903" [a04b08e1-09a2-4700-97ef-1d46decd0195] Running
	I0531 18:07:03.571194  261225 system_pods.go:61] "kube-controller-manager-no-preload-20220531175323-6903" [fc4e03c4-6dfa-492c-b27f-80c7dde0de7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:07:03.571207  261225 system_pods.go:61] "kube-proxy-8szbz" [e7e66d9f-358e-4d5f-b12d-541da7f43741] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:07:03.571216  261225 system_pods.go:61] "kube-scheduler-no-preload-20220531175323-6903" [5399c2c9-e9f9-4208-9bd3-f922cc3f4f6b] Running
	I0531 18:07:03.571224  261225 system_pods.go:61] "metrics-server-b955d9d8-bsgtk" [5c43931e-ba07-4e57-b438-73e230ac2391] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571230  261225 system_pods.go:61] "storage-provisioner" [a98841d0-cbd8-464c-b5bc-542abbaf8a0b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571237  261225 system_pods.go:74] duration metric: took 6.174332ms to wait for pod list to return data ...
	I0531 18:07:03.571248  261225 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:07:03.573670  261225 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:07:03.573690  261225 node_conditions.go:123] node cpu capacity is 8
	I0531 18:07:03.573700  261225 node_conditions.go:105] duration metric: took 2.442916ms to run NodePressure ...
	I0531 18:07:03.573714  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:07:03.691657  261225 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:07:03.695473  261225 kubeadm.go:777] kubelet initialised
	I0531 18:07:03.695496  261225 kubeadm.go:778] duration metric: took 3.812908ms waiting for restarted kubelet to initialise ...
	I0531 18:07:03.695502  261225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:07:03.699872  261225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	I0531 18:07:05.705225  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:08.204845  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:10.205511  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:12.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:15.204717  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:17.204780  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:19.205209  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:21.205381  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:23.704908  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:26.205961  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:28.705082  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:31.205047  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:33.205742  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:35.705103  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:38.205545  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:40.206261  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:42.704687  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:44.705052  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:47.205179  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:49.205593  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:51.704646  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
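
The pod_ready lines above poll coredns-64897985d-8cptk, which stays Pending because the node still carries the node.kubernetes.io/not-ready taint, so the scheduler reports 0/1 nodes available. A sketch of the same readiness check with client-go (the kubeconfig path is illustrative; assumes a client-go release compatible with the v1.23 API):

// Sketch of the readiness check performed by the pod_ready lines above:
// fetch the pod and look for a Ready condition with status True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"coredns-64897985d-8cptk", metav1.GetOptions{}) // pod name from the log
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("Ready=%s (reason %s)\n", c.Status, c.Reason)
		}
	}
}
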
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	52b0fa46cdf51       6de166512aa22       5 minutes ago       Exited              kindnet-cni               6                   512d6145343b2
	cb3e6f9b5d67c       4c03754524064       12 minutes ago      Running             kube-proxy                0                   95cdf505c32bc
	a2c6538b95f74       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   1f2c20e63b683
	1b1996168f6e9       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   6051433bcfd54
	509e04aaab068       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   de31468fb264b
	ea294bc0a9be2       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   3eec3f7ca8031
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:55:18 UTC, end at Tue 2022-05-31 18:07:55 UTC. --
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.436395795Z" level=warning msg="cleaning up after shim disconnected" id=5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34 namespace=k8s.io
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.436406915Z" level=info msg="cleaning up dead shim"
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.445684136Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2358 runtime=io.containerd.runc.v2\n"
	May 31 17:58:24 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:24.415891765Z" level=info msg="RemoveContainer for \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\""
	May 31 17:58:24 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:24.419889760Z" level=info msg="RemoveContainer for \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\" returns successfully"
	May 31 17:59:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:59:53.032279346Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	May 31 17:59:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:59:53.045343277Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\""
	May 31 17:59:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:59:53.046165472Z" level=info msg="StartContainer for \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\""
	May 31 17:59:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:59:53.206457331Z" level=info msg="StartContainer for \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\" returns successfully"
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.450804664Z" level=info msg="shim disconnected" id=42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.450859652Z" level=warning msg="cleaning up after shim disconnected" id=42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042 namespace=k8s.io
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.450873065Z" level=info msg="cleaning up dead shim"
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.460091371Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:00:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2682 runtime=io.containerd.runc.v2\n"
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.593344444Z" level=info msg="RemoveContainer for \"5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34\""
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.597811596Z" level=info msg="RemoveContainer for \"5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34\" returns successfully"
	May 31 18:02:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:02:53.031220176Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	May 31 18:02:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:02:53.042697798Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463\""
	May 31 18:02:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:02:53.043204599Z" level=info msg="StartContainer for \"52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463\""
	May 31 18:02:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:02:53.120003489Z" level=info msg="StartContainer for \"52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463\" returns successfully"
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.340195220Z" level=info msg="shim disconnected" id=52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.340258399Z" level=warning msg="cleaning up after shim disconnected" id=52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463 namespace=k8s.io
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.340274379Z" level=info msg="cleaning up dead shim"
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.348849445Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:03:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2773 runtime=io.containerd.runc.v2\n"
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.904094952Z" level=info msg="RemoveContainer for \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\""
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.908188119Z" level=info msg="RemoveContainer for \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\" returns successfully"
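
The containerd entries above show a crash loop: the kindnet-cni container is recreated (Attempt:5, then Attempt:6), runs for roughly ten seconds, and its shim disconnects. The crashed instance's own output is not in this log; a hedged way to retrieve it is kubectl's previous-container flag (pod and container names taken from the entries above):

  # logs of the last terminated kindnet-cni instance
  kubectl --context default-k8s-different-port-20220531175509-6903 -n kube-system \
    logs kindnet-vdbp9 -c kindnet-cni --previous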
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220531175509-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220531175509-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_55_37_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:55:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220531175509-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:07:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:06:05 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:06:05 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:06:05 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:06:05 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220531175509-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                6be22935-bf30-494f-8e0a-066b777ef988
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220531175509-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-vdbp9                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220531175509-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220531175509-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ff6gx                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220531175509-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 12m   kube-proxy  
	  Normal  Starting                 12m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet     Updated Node Allocatable limit across pods
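
The describe output above captures the causal chain for this failure: the CNI plugin never initializes, so the Ready condition stays False, so the node.kubernetes.io/not-ready taints are never removed, so ordinary pods cannot schedule. A compact, hedged way to confirm the readiness state and the taints without the full describe:

  # node readiness at a glance, then the raw taint list
  kubectl --context default-k8s-different-port-20220531175509-6903 get nodes -o wide
  kubectl --context default-k8s-different-port-20220531175509-6903 \
    get node default-k8s-different-port-20220531175509-6903 -o jsonpath='{.spec.taints}'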
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
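
A "martian source" is a packet whose source address the kernel's reverse-path check considers impossible on the receiving interface; here pod-CIDR traffic (10.244.0.0/24) keeps arriving on eth0 while the CNI is broken, so the kernel logs each attempt. Whether these messages appear at all is governed by two sysctls, which can be checked from inside the node (a sketch, assuming shell access via minikube ssh):

  # rp_filter mode and whether martians are logged
  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220531175509-6903 -- \
    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians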
	
	* 
	* ==> etcd [ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f] <==
	* {"level":"info","ts":"2022-05-31T17:55:31.829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:31.829Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220531175509-6903 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.831Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:55:31.831Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2022-05-31T17:56:05.802Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"200.644923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-05-31T17:56:05.802Z","caller":"traceutil/trace.go:171","msg":"trace[1170885200] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:476; }","duration":"200.814942ms","start":"2022-05-31T17:56:05.602Z","end":"2022-05-31T17:56:05.802Z","steps":["trace[1170885200] 'agreement among raft nodes before linearized reading'  (duration: 97.859628ms)","trace[1170885200] 'range keys from in-memory index tree'  (duration: 102.728736ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:15.762Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"169.329455ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638328710165085387 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:476 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:67 lease:6414956673310309577 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-05-31T17:56:15.762Z","caller":"traceutil/trace.go:171","msg":"trace[343527482] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"197.187499ms","start":"2022-05-31T17:56:15.565Z","end":"2022-05-31T17:56:15.762Z","steps":["trace[343527482] 'read index received'  (duration: 26.832028ms)","trace[343527482] 'applied index is now lower than readState.Index'  (duration: 170.353994ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:15.762Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"197.426091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-different-port-20220531175509-6903\" ","response":"range_response_count:1 size:4921"}
	{"level":"info","ts":"2022-05-31T17:56:15.762Z","caller":"traceutil/trace.go:171","msg":"trace[1430337056] range","detail":"{range_begin:/registry/minions/default-k8s-different-port-20220531175509-6903; range_end:; response_count:1; response_revision:478; }","duration":"197.45664ms","start":"2022-05-31T17:56:15.565Z","end":"2022-05-31T17:56:15.762Z","steps":["trace[1430337056] 'agreement among raft nodes before linearized reading'  (duration: 197.296156ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T17:56:15.763Z","caller":"traceutil/trace.go:171","msg":"trace[1158323802] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"230.143408ms","start":"2022-05-31T17:56:15.532Z","end":"2022-05-31T17:56:15.763Z","steps":["trace[1158323802] 'process raft request'  (duration: 59.357333ms)","trace[1158323802] 'compare'  (duration: 168.812361ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:16.147Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.435587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:56:16.147Z","caller":"traceutil/trace.go:171","msg":"trace[234350805] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:478; }","duration":"275.522421ms","start":"2022-05-31T17:56:15.872Z","end":"2022-05-31T17:56:16.147Z","steps":["trace[234350805] 'range keys from in-memory index tree'  (duration: 275.375333ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:59:31.188Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.089567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-different-port-20220531175509-6903\" ","response":"range_response_count:1 size:4921"}
	{"level":"info","ts":"2022-05-31T17:59:31.188Z","caller":"traceutil/trace.go:171","msg":"trace[1870884025] range","detail":"{range_begin:/registry/minions/default-k8s-different-port-20220531175509-6903; range_end:; response_count:1; response_revision:559; }","duration":"122.184032ms","start":"2022-05-31T17:59:31.066Z","end":"2022-05-31T17:59:31.188Z","steps":["trace[1870884025] 'range keys from in-memory index tree'  (duration: 121.959844ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T18:05:31.845Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":582}
	{"level":"info","ts":"2022-05-31T18:05:31.846Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":582,"took":"480.695µs"}
	
	* 
	* ==> kernel <==
	*  18:07:55 up  1:50,  0 users,  load average: 0.80, 0.67, 1.15
	Linux default-k8s-different-port-20220531175509-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11] <==
	* I0531 17:55:34.101758       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:55:34.111206       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:55:34.111380       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:55:34.111710       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:55:34.111829       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:55:34.119947       1 controller.go:611] quota admission added evaluator for: namespaces
	I0531 17:55:34.997992       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:55:34.998017       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:55:35.015412       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:55:35.019403       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:55:35.019422       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:55:35.375475       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:55:35.417331       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:55:35.533778       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:55:35.540935       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0531 17:55:35.541792       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:55:35.545091       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:55:36.131709       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:55:36.909454       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:55:36.916783       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:55:36.925822       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:55:42.014482       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:55:51.091344       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:55:51.190456       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:55:52.128829       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999] <==
	* I0531 17:55:50.394877       1 shared_informer.go:247] Caches are synced for HPA 
	I0531 17:55:50.406053       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 17:55:50.437491       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:55:50.438548       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:55:50.438580       1 shared_informer.go:247] Caches are synced for GC 
	I0531 17:55:50.438605       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 17:55:50.438586       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0531 17:55:50.438629       1 shared_informer.go:247] Caches are synced for endpoint 
	I0531 17:55:50.438646       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0531 17:55:50.537706       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 17:55:50.537739       1 disruption.go:371] Sending events to api server.
	I0531 17:55:50.542100       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:55:50.546629       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:55:50.574949       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0531 17:55:50.588247       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 17:55:50.965267       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:55:51.037122       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:55:51.037154       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:55:51.095058       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:55:51.107553       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:55:51.196401       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vdbp9"
	I0531 17:55:51.200003       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ff6gx"
	I0531 17:55:51.342466       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-z47gr"
	I0531 17:55:51.346589       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-92zgx"
	I0531 17:55:51.362421       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-z47gr"
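
The coredns scale-up-to-2 followed immediately by scale-down-to-1 is minikube's standard trimming of the stock two-replica Deployment to a single replica on a one-node cluster; the deleted pod (coredns-64897985d-z47gr) is expected churn, not part of the failure. The surviving replica count can be confirmed with:

  # expect a single desired replica after minikube's scale-down
  kubectl --context default-k8s-different-port-20220531175509-6903 -n kube-system get deployment coredns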
	
	* 
	* ==> kube-proxy [cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783] <==
	* I0531 17:55:52.033542       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0531 17:55:52.033619       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0531 17:55:52.033664       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:55:52.125079       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:55:52.125116       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:55:52.125125       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:55:52.125149       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:55:52.125539       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:55:52.126126       1 config.go:317] "Starting service config controller"
	I0531 17:55:52.126162       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:55:52.126352       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:55:52.126370       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:55:52.227300       1 shared_informer.go:247] Caches are synced for service config 
	I0531 17:55:52.227972       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb] <==
	* W0531 17:55:34.201559       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:55:34.201633       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:55:34.201879       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:55:34.202010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:55:34.202066       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:34.202150       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:34.202177       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:34.202150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:34.202470       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 17:55:34.202627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 17:55:34.202947       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:55:34.203128       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:55:34.204109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:55:34.204191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:55:34.204440       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:55:34.204494       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:55:35.025337       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:55:35.025375       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:55:35.045433       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:55:35.045468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 17:55:35.202591       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:55:35.202639       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:35.202763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:55:35.202795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0531 17:55:37.118161       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
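
The burst of "forbidden" reflector warnings before 17:55:37 is the normal kubeadm startup race: the scheduler comes up before its RBAC bindings exist, retries, and the final "Caches are synced" line shows it recovered. Whether the scheduler's permissions are in place can be spot-checked by impersonation (hedged example):

  # should print "yes" once system:kube-scheduler's RBAC is bound
  kubectl --context default-k8s-different-port-20220531175509-6903 auth can-i \
    list pods --as=system:kube-scheduler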
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:55:18 UTC, end at Tue 2022-05-31 18:07:55 UTC. --
	May 31 18:06:52 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:06:52.365657    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:06:55 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:06:55.028868    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:06:55 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:06:55.029198    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:06:57 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:06:57.366480    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:02 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:02.367936    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:06 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:06.029241    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:06 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:06.029647    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:07 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:07.368899    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:12 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:12.369643    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:17 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:17.029395    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:17 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:17.029877    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:17 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:17.370433    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:22 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:22.371847    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:27 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:27.373452    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:29 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:29.028958    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:29 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:29.029240    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:32 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:32.374707    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:37 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:37.375867    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:40 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:40.029461    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:40 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:40.029827    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:42 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:42.377199    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:47 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:47.378213    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:52 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:52.379654    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:55 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:55.028844    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:55 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:55.029227    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
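
By this point the kubelet has reached CrashLoopBackOff's maximum interval (back-off 5m0s), so it only retries kindnet-cni every five minutes, which is why the same RemoveContainer/"Error syncing pod" pair repeats. The restart count and the previous instance's exit reason are available directly from the pod status (standard v1.Pod status fields):

  # restart count and last termination reason for the first container
  kubectl --context default-k8s-different-port-20220531175509-6903 -n kube-system \
    get pod kindnet-vdbp9 -o \
    jsonpath='{.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].lastState.terminated.reason}'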
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-92zgx storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 describe pod busybox coredns-64897985d-92zgx storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod busybox coredns-64897985d-92zgx storage-provisioner: exit status 1 (55.345832ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68wn9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-68wn9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  49s (x8 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
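
The FailedScheduling event is the user-visible end of the chain: busybox carries only the default 300s tolerations for the NoExecute not-ready/unreachable taints and nothing tolerating the NoSchedule not-ready taint, so 0/1 nodes qualify. The scheduler's verdict can also be read off the PodScheduled condition (a hedged one-liner; the jsonpath filter syntax is standard kubectl):

  kubectl --context default-k8s-different-port-20220531175509-6903 get pod busybox \
    -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")].message}'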
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-92zgx" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod busybox coredns-64897985d-92zgx storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220531175509-6903
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220531175509-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a",
	        "Created": "2022-05-31T17:55:17.80847266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238395,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:55:18.158165808Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hosts",
	        "LogPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a-json.log",
	        "Name": "/default-k8s-different-port-20220531175509-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220531175509-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220531175509-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220531175509-6903",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220531175509-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220531175509-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8eaff00a202d06cce1c8d58235602194947fd26c7a48f709899b5f65739bc85",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d8eaff00a202",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220531175509-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b24400321365",
	                        "default-k8s-different-port-20220531175509-6903"
	                    ],
	                    "NetworkID": "6fc1f79f54eab1e8df36883c8283b483c18aa0e383b30bdb7aa37eb035c0586e",
	                    "EndpointID": "2b828753c599e8680fae2d033551c2f135b67a4addb875098c2181def1415f01",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
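The inspect dump above is the post-mortem evidence for the port wiring: this profile's API server listens on 8444/tcp (hence "default-k8s-different-port") and Docker publishes it on 127.0.0.1:49409. As a minimal sketch, the same Go-template style the harness itself uses for 22/tcp later in this report would recover that mapping in one call (profile name taken from the output above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-different-port-20220531175509-6903

which should print 49409 for as long as the container above is up.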
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220531175509-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903                     | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                                 | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:06:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:06:31.856563  261225 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:06:31.856712  261225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:06:31.856722  261225 out.go:309] Setting ErrFile to fd 2...
	I0531 18:06:31.856727  261225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:06:31.856832  261225 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:06:31.857034  261225 out.go:303] Setting JSON to false
	I0531 18:06:31.858042  261225 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6543,"bootTime":1654013849,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:06:31.858099  261225 start.go:125] virtualization: kvm guest
	I0531 18:06:31.860371  261225 out.go:177] * [no-preload-20220531175323-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:06:31.861722  261225 notify.go:193] Checking for updates...
	I0531 18:06:31.861741  261225 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:06:31.863130  261225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:06:31.864624  261225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:06:31.865934  261225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:06:31.867316  261225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:06:31.868940  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:06:31.869397  261225 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:06:31.907400  261225 docker.go:137] docker version: linux-20.10.16
	I0531 18:06:31.907473  261225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:06:32.005401  261225 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:06:31.935579157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:06:32.005490  261225 docker.go:254] overlay module found
	I0531 18:06:32.008184  261225 out.go:177] * Using the docker driver based on existing profile
	I0531 18:06:32.009506  261225 start.go:284] selected driver: docker
	I0531 18:06:32.009519  261225 start.go:806] validating driver "docker" against &{Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:32.009608  261225 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:06:32.010442  261225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:06:32.108530  261225 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:06:32.039046549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:06:32.108794  261225 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:06:32.108817  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:06:32.108827  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:06:32.108849  261225 start_flags.go:306] config:
	{Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:32.111037  261225 out.go:177] * Starting control plane node no-preload-20220531175323-6903 in cluster no-preload-20220531175323-6903
	I0531 18:06:32.112380  261225 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:06:32.113769  261225 out.go:177] * Pulling base image ...
	I0531 18:06:32.115200  261225 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:06:32.115228  261225 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:06:32.115343  261225 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json ...
	I0531 18:06:32.115478  261225 cache.go:107] acquiring lock: {Name:mke7c3123bbb887802876b6038e785eff1d65578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115516  261225 cache.go:107] acquiring lock: {Name:mkccfd735c16da1ed9ea4fc459feb477365b33a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115520  261225 cache.go:107] acquiring lock: {Name:mk598b9f501113e758a5b1053c8a9a41e87e7c42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115517  261225 cache.go:107] acquiring lock: {Name:mk92196aa514c10ef84dd2326a35399f7c3719a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115545  261225 cache.go:107] acquiring lock: {Name:mk59854aac2611f794ffa59524077b81afbc7de4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115552  261225 cache.go:107] acquiring lock: {Name:mk37d69d4525de4b98ff3597b4269e1680132b96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115480  261225 cache.go:107] acquiring lock: {Name:mka8d6fd8013f251c85f4bca8a18522e173be81e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115558  261225 cache.go:107] acquiring lock: {Name:mk4a95c9ed8757a79d1e9fa1e44efcaead7631e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115785  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 18:06:32.115815  261225 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 348.663µs
	I0531 18:06:32.115829  261225 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 18:06:32.115875  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 exists
	I0531 18:06:32.115877  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 exists
	I0531 18:06:32.115899  261225 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6" took 392.805µs
	I0531 18:06:32.115911  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0531 18:06:32.115912  261225 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 succeeded
	I0531 18:06:32.115913  261225 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6" took 404.132µs
	I0531 18:06:32.115930  261225 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 399.123µs
	I0531 18:06:32.115947  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 exists
	I0531 18:06:32.115972  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 exists
	I0531 18:06:32.115973  261225 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6" took 444.025µs
	I0531 18:06:32.115992  261225 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6" took 526.55µs
	I0531 18:06:32.115932  261225 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 succeeded
	I0531 18:06:32.115998  261225 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 succeeded
	I0531 18:06:32.116024  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0531 18:06:32.115948  261225 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0531 18:06:32.115887  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0531 18:06:32.116038  261225 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 484.283µs
	I0531 18:06:32.116056  261225 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0531 18:06:32.116054  261225 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 533.964µs
	I0531 18:06:32.116074  261225 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0531 18:06:32.116007  261225 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 succeeded
	I0531 18:06:32.116089  261225 cache.go:87] Successfully saved all images to host disk.
	I0531 18:06:32.161016  261225 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:06:32.161038  261225 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:06:32.161053  261225 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:06:32.161092  261225 start.go:352] acquiring machines lock for no-preload-20220531175323-6903: {Name:mk8635283b759be2fcd7aacbafc64b0c778ff5b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.161181  261225 start.go:356] acquired machines lock for "no-preload-20220531175323-6903" in 68.368µs
	I0531 18:06:32.161203  261225 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:06:32.161208  261225 fix.go:55] fixHost starting: 
	I0531 18:06:32.161424  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:06:32.191567  261225 fix.go:103] recreateIfNeeded on no-preload-20220531175323-6903: state=Stopped err=<nil>
	W0531 18:06:32.191592  261225 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:06:32.194700  261225 out.go:177] * Restarting existing docker container for "no-preload-20220531175323-6903" ...
	I0531 18:06:32.196063  261225 cli_runner.go:164] Run: docker start no-preload-20220531175323-6903
	I0531 18:06:32.572533  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:06:32.606201  261225 kic.go:416] container "no-preload-20220531175323-6903" state is running.
	I0531 18:06:32.606544  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:32.637813  261225 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json ...
	I0531 18:06:32.637995  261225 machine.go:88] provisioning docker machine ...
	I0531 18:06:32.638016  261225 ubuntu.go:169] provisioning hostname "no-preload-20220531175323-6903"
	I0531 18:06:32.638050  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:32.668506  261225 main.go:134] libmachine: Using SSH client type: native
	I0531 18:06:32.668682  261225 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0531 18:06:32.668704  261225 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220531175323-6903 && echo "no-preload-20220531175323-6903" | sudo tee /etc/hostname
	I0531 18:06:32.669243  261225 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46970->127.0.0.1:49432: read: connection reset by peer
	I0531 18:06:35.786250  261225 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220531175323-6903
	
	I0531 18:06:35.786326  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:35.821236  261225 main.go:134] libmachine: Using SSH client type: native
	I0531 18:06:35.821365  261225 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0531 18:06:35.821383  261225 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220531175323-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220531175323-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220531175323-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:06:35.934343  261225 main.go:134] libmachine: SSH cmd err, output: <nil>: 
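	The shell fragment above is how minikube keeps the restored hostname resolvable inside the container: when /etc/hosts does not already name the host, an existing 127.0.1.1 entry is rewritten in place, and one is appended when none exists. Either branch leaves a loopback alias of the form (a sketch of the resulting line, not captured container state):
	
		127.0.1.1 no-preload-20220531175323-6903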
	I0531 18:06:35.934366  261225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:06:35.934410  261225 ubuntu.go:177] setting up certificates
	I0531 18:06:35.934428  261225 provision.go:83] configureAuth start
	I0531 18:06:35.934476  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:35.965223  261225 provision.go:138] copyHostCerts
	I0531 18:06:35.965272  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:06:35.965282  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:06:35.965344  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:06:35.965427  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:06:35.965439  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:06:35.965462  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:06:35.965511  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:06:35.965519  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:06:35.965539  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:06:35.965578  261225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220531175323-6903 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220531175323-6903]
	I0531 18:06:36.057355  261225 provision.go:172] copyRemoteCerts
	I0531 18:06:36.057402  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:06:36.057430  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.089999  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.169898  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0531 18:06:36.186339  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:06:36.202145  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:06:36.217945  261225 provision.go:86] duration metric: configureAuth took 283.507566ms
	I0531 18:06:36.217967  261225 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:06:36.218141  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:06:36.218159  261225 machine.go:91] provisioned docker machine in 3.58014978s
	I0531 18:06:36.218168  261225 start.go:306] post-start starting for "no-preload-20220531175323-6903" (driver="docker")
	I0531 18:06:36.218179  261225 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:06:36.218216  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:06:36.218249  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.250462  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.329903  261225 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:06:36.332443  261225 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:06:36.332472  261225 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:06:36.332481  261225 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:06:36.332487  261225 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:06:36.332499  261225 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:06:36.332539  261225 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:06:36.332602  261225 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:06:36.332675  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:06:36.338862  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:06:36.355254  261225 start.go:309] post-start completed in 137.071829ms
	I0531 18:06:36.355304  261225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:06:36.355336  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.386735  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.467076  261225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:06:36.470822  261225 fix.go:57] fixHost completed within 4.309609112s
	I0531 18:06:36.470844  261225 start.go:81] releasing machines lock for "no-preload-20220531175323-6903", held for 4.309648254s
	I0531 18:06:36.470905  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:36.502427  261225 ssh_runner.go:195] Run: systemctl --version
	I0531 18:06:36.502473  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.502475  261225 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:06:36.502528  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.537057  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.539320  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.638776  261225 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:06:36.649832  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:06:36.658496  261225 docker.go:187] disabling docker service ...
	I0531 18:06:36.658539  261225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:06:36.667272  261225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:06:36.675216  261225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:06:36.752203  261225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:06:36.818959  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:06:36.827401  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:06:36.839221  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
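For readability: the base64 payload in the command above (piped through base64 -d on the node) is minikube's generated containerd config.toml. Hand-decoded from the payload itself and abbreviated, the settings that matter for this run are:

version = 2
root = "/var/lib/containerd"
state = "/run/containerd"

[grpc]
  address = "/run/containerd/containerd.sock"

[plugins."io.containerd.grpc.v1.cri"]
  stream_server_port = "10010"
  sandbox_image = "k8s.gcr.io/pause:3.6"
  restrict_oom_score_adj = false
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.mk"
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://registry-1.docker.io"]

Note that conf_dir is /etc/cni/net.mk rather than the conventional /etc/cni/net.d; the kubelet further down is started with the matching --cni-conf-dir=/etc/cni/net.mk flag.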
	I0531 18:06:36.851589  261225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:06:36.857335  261225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:06:36.865201  261225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:06:36.934383  261225 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:06:37.001672  261225 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:06:37.001743  261225 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:06:37.005089  261225 start.go:468] Will wait 60s for crictl version
	I0531 18:06:37.005161  261225 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:06:37.030007  261225 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:06:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:06:48.077720  261225 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:06:48.100248  261225 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:06:48.100298  261225 ssh_runner.go:195] Run: containerd --version
	I0531 18:06:48.127707  261225 ssh_runner.go:195] Run: containerd --version
	I0531 18:06:48.157240  261225 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:06:48.158764  261225 cli_runner.go:164] Run: docker network inspect no-preload-20220531175323-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:06:48.189984  261225 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0531 18:06:48.193238  261225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:06:48.203917  261225 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:06:48.205236  261225 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:06:48.205283  261225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:06:48.227240  261225 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:06:48.227263  261225 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:06:48.227305  261225 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:06:48.249494  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:06:48.249514  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:06:48.249533  261225 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:06:48.249549  261225 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220531175323-6903 NodeName:no-preload-20220531175323-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:06:48.249720  261225 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220531175323-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
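The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---) is what later gets written to /var/tmp/minikube/kubeadm.yaml.new and fed to the kubeadm init phase ... --config commands further down. minikube fills it in from a Go text/template; a much-reduced sketch of that rendering step (the struct and field names here are invented for illustration, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// kubeadmValues carries only the values this sketch substitutes; the real
// template has many more knobs (certSANs, extra args, etcd paths, ...).
type kubeadmValues struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the run above.
	_ = t.Execute(os.Stdout, kubeadmValues{
		AdvertiseAddress: "192.168.67.2",
		NodeName:         "no-preload-20220531175323-6903",
		PodSubnet:        "10.244.0.0/16",
	})
}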
	I0531 18:06:48.249812  261225 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220531175323-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:06:48.249865  261225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:06:48.256345  261225 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:06:48.256398  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:06:48.262969  261225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (575 bytes)
	I0531 18:06:48.274664  261225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:06:48.287040  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2059 bytes)
	I0531 18:06:48.299091  261225 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:06:48.301889  261225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:06:48.310656  261225 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903 for IP: 192.168.67.2
	I0531 18:06:48.310742  261225 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:06:48.310777  261225 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:06:48.310834  261225 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.key
	I0531 18:06:48.310884  261225 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key.c7fa3a9e
	I0531 18:06:48.310918  261225 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key
	I0531 18:06:48.310996  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:06:48.311025  261225 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:06:48.311034  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:06:48.311059  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:06:48.311084  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:06:48.311106  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:06:48.311181  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:06:48.311875  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:06:48.328351  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:06:48.344708  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:06:48.361384  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:06:48.377622  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:06:48.393772  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:06:48.409607  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:06:48.425962  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:06:48.441752  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:06:48.457422  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:06:48.473322  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:06:48.489365  261225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:06:48.501512  261225 ssh_runner.go:195] Run: openssl version
	I0531 18:06:48.505937  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:06:48.512677  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.515513  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.515567  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.520028  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:06:48.526318  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:06:48.533197  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.536004  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.536048  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.540484  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:06:48.546655  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:06:48.553433  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.556334  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.556368  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.560699  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:06:48.566833  261225 kubeadm.go:395] StartCluster: {Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:48.566936  261225 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:06:48.566963  261225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:06:48.590607  261225 cri.go:87] found id: "ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	I0531 18:06:48.590629  261225 cri.go:87] found id: "b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e"
	I0531 18:06:48.590640  261225 cri.go:87] found id: "91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b"
	I0531 18:06:48.590651  261225 cri.go:87] found id: "2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d"
	I0531 18:06:48.590665  261225 cri.go:87] found id: "0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509"
	I0531 18:06:48.590677  261225 cri.go:87] found id: "c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66"
	I0531 18:06:48.590684  261225 cri.go:87] found id: ""
	I0531 18:06:48.590707  261225 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:06:48.601948  261225 cri.go:114] JSON = null
	W0531 18:06:48.601985  261225 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
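This warning appears benign for this run: before restarting, minikube asks runc (under root /run/containerd/runc/k8s.io) for paused containers so it can unpause them, and runc printed the literal JSON null, i.e. no containers, while crictl ps had just listed six running ones. A small Go sketch of that decode step (field names follow runc's list -f json output; treat the details as illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// runc `list -f json` prints a JSON array of container states, or the
// literal null when there are none (exactly what the run above got back).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedIDs returns the IDs of paused containers from runc's JSON output.
func pausedIDs(out []byte) ([]string, error) {
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range containers {
		if c.Status == "paused" {
			ids = append(ids, c.ID)
		}
	}
	return ids, nil
}

func main() {
	ids, err := pausedIDs([]byte(`null`)) // what runc returned here
	fmt.Println(len(ids), err)           // 0 <nil> -- hence "list returned 0 containers"
}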
	I0531 18:06:48.602021  261225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:06:48.608119  261225 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:06:48.608137  261225 kubeadm.go:626] restartCluster start
	I0531 18:06:48.608162  261225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:06:48.613826  261225 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:48.614554  261225 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220531175323-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:06:48.615039  261225 kubeconfig.go:127] "no-preload-20220531175323-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:06:48.615784  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:06:48.617278  261225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:06:48.623232  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:48.623290  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:48.630395  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:48.830763  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:48.830820  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:48.838930  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.031184  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.031241  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.039494  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.230727  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.230797  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.239312  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.430567  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.430662  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.438967  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.631308  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.631386  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.640008  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.831278  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.831352  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.839490  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.030797  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.030869  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.039659  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.230992  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.231065  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.239370  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.430595  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.430703  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.438937  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.631190  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.631256  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.639827  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.831099  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.831190  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.839564  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.030836  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.030912  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.039250  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.230475  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.230546  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.238738  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.431028  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.431083  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.439535  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.630862  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.630914  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.639047  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.639064  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.639103  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.646503  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.646523  261225 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:06:51.646531  261225 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:06:51.646545  261225 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:06:51.646589  261225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:06:51.669569  261225 cri.go:87] found id: "ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	I0531 18:06:51.669588  261225 cri.go:87] found id: "b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e"
	I0531 18:06:51.669595  261225 cri.go:87] found id: "91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b"
	I0531 18:06:51.669601  261225 cri.go:87] found id: "2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d"
	I0531 18:06:51.669608  261225 cri.go:87] found id: "0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509"
	I0531 18:06:51.669617  261225 cri.go:87] found id: "c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66"
	I0531 18:06:51.669633  261225 cri.go:87] found id: ""
	I0531 18:06:51.669640  261225 cri.go:232] Stopping containers: [ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516 b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e 91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b 2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d 0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509 c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66]
	I0531 18:06:51.669675  261225 ssh_runner.go:195] Run: which crictl
	I0531 18:06:51.672277  261225 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516 b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e 91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b 2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d 0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509 c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66
	I0531 18:06:51.696665  261225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:06:51.706131  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:06:51.712590  261225 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 17:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 17:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 17:53 /etc/kubernetes/scheduler.conf
	
	I0531 18:06:51.712632  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:06:51.718730  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:06:51.724887  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:06:51.731013  261225 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.731060  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:06:51.737056  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:06:51.743102  261225 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.743164  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:06:51.748937  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:06:51.755338  261225 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:06:51.755353  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:51.795954  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.528000  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.654713  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.709489  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.750049  261225 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:06:52.750109  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:53.257876  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:53.757835  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:54.257770  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:54.757793  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:55.258138  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:55.757795  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:56.258203  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:56.758036  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:57.257882  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:57.757890  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.258306  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.758044  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.810957  261225 api_server.go:71] duration metric: took 6.060906737s to wait for apiserver process to appear ...
	I0531 18:06:58.810993  261225 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:06:58.811006  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:06:58.811421  261225 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0531 18:06:59.312100  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:01.519859  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:07:01.519904  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:07:01.812506  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:01.816767  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:07:01.816787  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:07:02.312284  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:02.316938  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:07:02.316963  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:07:02.812304  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:02.817359  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0531 18:07:02.822648  261225 api_server.go:140] control plane version: v1.23.6
	I0531 18:07:02.822669  261225 api_server.go:130] duration metric: took 4.011670774s to wait for apiserver health ...
	I0531 18:07:02.822682  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:07:02.822688  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:07:02.825359  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:07:02.826864  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:07:02.830365  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:07:02.830389  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:07:02.844337  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:07:03.565042  261225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:07:03.571091  261225 system_pods.go:59] 9 kube-system pods found
	I0531 18:07:03.571119  261225 system_pods.go:61] "coredns-64897985d-8cptk" [b7548080-9210-497c-9a72-e3d0dc790731] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571127  261225 system_pods.go:61] "etcd-no-preload-20220531175323-6903" [0c3833e1-4748-46be-b9f9-ba9743784100] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:07:03.571136  261225 system_pods.go:61] "kindnet-n856k" [1bf232e0-3302-4413-8693-378d7bcc2bad] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:07:03.571183  261225 system_pods.go:61] "kube-apiserver-no-preload-20220531175323-6903" [a04b08e1-09a2-4700-97ef-1d46decd0195] Running
	I0531 18:07:03.571194  261225 system_pods.go:61] "kube-controller-manager-no-preload-20220531175323-6903" [fc4e03c4-6dfa-492c-b27f-80c7dde0de7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:07:03.571207  261225 system_pods.go:61] "kube-proxy-8szbz" [e7e66d9f-358e-4d5f-b12d-541da7f43741] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:07:03.571216  261225 system_pods.go:61] "kube-scheduler-no-preload-20220531175323-6903" [5399c2c9-e9f9-4208-9bd3-f922cc3f4f6b] Running
	I0531 18:07:03.571224  261225 system_pods.go:61] "metrics-server-b955d9d8-bsgtk" [5c43931e-ba07-4e57-b438-73e230ac2391] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571230  261225 system_pods.go:61] "storage-provisioner" [a98841d0-cbd8-464c-b5bc-542abbaf8a0b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571237  261225 system_pods.go:74] duration metric: took 6.174332ms to wait for pod list to return data ...
	I0531 18:07:03.571248  261225 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:07:03.573670  261225 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:07:03.573690  261225 node_conditions.go:123] node cpu capacity is 8
	I0531 18:07:03.573700  261225 node_conditions.go:105] duration metric: took 2.442916ms to run NodePressure ...
	I0531 18:07:03.573714  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:07:03.691657  261225 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:07:03.695473  261225 kubeadm.go:777] kubelet initialised
	I0531 18:07:03.695496  261225 kubeadm.go:778] duration metric: took 3.812908ms waiting for restarted kubelet to initialise ...
	I0531 18:07:03.695502  261225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:07:03.699872  261225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	I0531 18:07:05.705225  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:08.204845  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:10.205511  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:12.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:15.204717  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:17.204780  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:19.205209  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:21.205381  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:23.704908  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:26.205961  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:28.705082  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:31.205047  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:33.205742  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:35.705103  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:38.205545  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:40.206261  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:42.704687  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:44.705052  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:47.205179  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:49.205593  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:51.704646  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	52b0fa46cdf51       6de166512aa22       5 minutes ago       Exited              kindnet-cni               6                   512d6145343b2
	cb3e6f9b5d67c       4c03754524064       12 minutes ago      Running             kube-proxy                0                   95cdf505c32bc
	a2c6538b95f74       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   1f2c20e63b683
	1b1996168f6e9       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   6051433bcfd54
	509e04aaab068       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   de31468fb264b
	ea294bc0a9be2       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   3eec3f7ca8031
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:55:18 UTC, end at Tue 2022-05-31 18:07:57 UTC. --
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.436395795Z" level=warning msg="cleaning up after shim disconnected" id=5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34 namespace=k8s.io
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.436406915Z" level=info msg="cleaning up dead shim"
	May 31 17:58:23 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:23.445684136Z" level=warning msg="cleanup warnings time=\"2022-05-31T17:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2358 runtime=io.containerd.runc.v2\n"
	May 31 17:58:24 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:24.415891765Z" level=info msg="RemoveContainer for \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\""
	May 31 17:58:24 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:58:24.419889760Z" level=info msg="RemoveContainer for \"55f0b6848e2ed85de00ff76b8f1ec446075d27fb47b030941616f5ee2bea7725\" returns successfully"
	May 31 17:59:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:59:53.032279346Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	May 31 17:59:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:59:53.045343277Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\""
	May 31 17:59:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:59:53.046165472Z" level=info msg="StartContainer for \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\""
	May 31 17:59:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T17:59:53.206457331Z" level=info msg="StartContainer for \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\" returns successfully"
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.450804664Z" level=info msg="shim disconnected" id=42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.450859652Z" level=warning msg="cleaning up after shim disconnected" id=42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042 namespace=k8s.io
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.450873065Z" level=info msg="cleaning up dead shim"
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.460091371Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:00:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2682 runtime=io.containerd.runc.v2\n"
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.593344444Z" level=info msg="RemoveContainer for \"5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34\""
	May 31 18:00:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:00:03.597811596Z" level=info msg="RemoveContainer for \"5463b775117698a3eda0f275e365e5713cb794b3e1cca54ef22e16e76308fb34\" returns successfully"
	May 31 18:02:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:02:53.031220176Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	May 31 18:02:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:02:53.042697798Z" level=info msg="CreateContainer within sandbox \"512d6145343b2b216a87131ca89ab39a38814c321a433fcbf4bf9f5028e20ec8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463\""
	May 31 18:02:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:02:53.043204599Z" level=info msg="StartContainer for \"52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463\""
	May 31 18:02:53 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:02:53.120003489Z" level=info msg="StartContainer for \"52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463\" returns successfully"
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.340195220Z" level=info msg="shim disconnected" id=52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.340258399Z" level=warning msg="cleaning up after shim disconnected" id=52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463 namespace=k8s.io
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.340274379Z" level=info msg="cleaning up dead shim"
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.348849445Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:03:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2773 runtime=io.containerd.runc.v2\n"
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.904094952Z" level=info msg="RemoveContainer for \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\""
	May 31 18:03:03 default-k8s-different-port-20220531175509-6903 containerd[502]: time="2022-05-31T18:03:03.908188119Z" level=info msg="RemoveContainer for \"42f666882bdbce015af23c9003dea2876f79b5da2faa102cb83f0f34991cd042\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220531175509-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220531175509-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_55_37_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:55:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220531175509-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:07:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:06:05 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:06:05 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:06:05 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:06:05 +0000   Tue, 31 May 2022 17:55:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220531175509-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                6be22935-bf30-494f-8e0a-066b777ef988
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220531175509-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-vdbp9                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220531175509-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220531175509-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-ff6gx                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220531175509-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 12m   kube-proxy  
	  Normal  Starting                 12m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f] <==
	* {"level":"info","ts":"2022-05-31T17:55:31.829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-05-31T17:55:31.829Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220531175509-6903 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.830Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:55:31.831Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:55:31.831Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2022-05-31T17:56:05.802Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"200.644923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-05-31T17:56:05.802Z","caller":"traceutil/trace.go:171","msg":"trace[1170885200] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:476; }","duration":"200.814942ms","start":"2022-05-31T17:56:05.602Z","end":"2022-05-31T17:56:05.802Z","steps":["trace[1170885200] 'agreement among raft nodes before linearized reading'  (duration: 97.859628ms)","trace[1170885200] 'range keys from in-memory index tree'  (duration: 102.728736ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:15.762Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"169.329455ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638328710165085387 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:476 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:67 lease:6414956673310309577 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-05-31T17:56:15.762Z","caller":"traceutil/trace.go:171","msg":"trace[343527482] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"197.187499ms","start":"2022-05-31T17:56:15.565Z","end":"2022-05-31T17:56:15.762Z","steps":["trace[343527482] 'read index received'  (duration: 26.832028ms)","trace[343527482] 'applied index is now lower than readState.Index'  (duration: 170.353994ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:15.762Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"197.426091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-different-port-20220531175509-6903\" ","response":"range_response_count:1 size:4921"}
	{"level":"info","ts":"2022-05-31T17:56:15.762Z","caller":"traceutil/trace.go:171","msg":"trace[1430337056] range","detail":"{range_begin:/registry/minions/default-k8s-different-port-20220531175509-6903; range_end:; response_count:1; response_revision:478; }","duration":"197.45664ms","start":"2022-05-31T17:56:15.565Z","end":"2022-05-31T17:56:15.762Z","steps":["trace[1430337056] 'agreement among raft nodes before linearized reading'  (duration: 197.296156ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T17:56:15.763Z","caller":"traceutil/trace.go:171","msg":"trace[1158323802] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"230.143408ms","start":"2022-05-31T17:56:15.532Z","end":"2022-05-31T17:56:15.763Z","steps":["trace[1158323802] 'process raft request'  (duration: 59.357333ms)","trace[1158323802] 'compare'  (duration: 168.812361ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:56:16.147Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.435587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:56:16.147Z","caller":"traceutil/trace.go:171","msg":"trace[234350805] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:478; }","duration":"275.522421ms","start":"2022-05-31T17:56:15.872Z","end":"2022-05-31T17:56:16.147Z","steps":["trace[234350805] 'range keys from in-memory index tree'  (duration: 275.375333ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:59:31.188Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.089567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-different-port-20220531175509-6903\" ","response":"range_response_count:1 size:4921"}
	{"level":"info","ts":"2022-05-31T17:59:31.188Z","caller":"traceutil/trace.go:171","msg":"trace[1870884025] range","detail":"{range_begin:/registry/minions/default-k8s-different-port-20220531175509-6903; range_end:; response_count:1; response_revision:559; }","duration":"122.184032ms","start":"2022-05-31T17:59:31.066Z","end":"2022-05-31T17:59:31.188Z","steps":["trace[1870884025] 'range keys from in-memory index tree'  (duration: 121.959844ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T18:05:31.845Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":582}
	{"level":"info","ts":"2022-05-31T18:05:31.846Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":582,"took":"480.695µs"}
	
	* 
	* ==> kernel <==
	*  18:07:57 up  1:50,  0 users,  load average: 0.80, 0.67, 1.15
	Linux default-k8s-different-port-20220531175509-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11] <==
	* I0531 17:55:34.101758       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:55:34.111206       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:55:34.111380       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:55:34.111710       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:55:34.111829       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:55:34.119947       1 controller.go:611] quota admission added evaluator for: namespaces
	I0531 17:55:34.997992       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:55:34.998017       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:55:35.015412       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:55:35.019403       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:55:35.019422       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:55:35.375475       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:55:35.417331       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:55:35.533778       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:55:35.540935       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0531 17:55:35.541792       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:55:35.545091       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:55:36.131709       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:55:36.909454       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:55:36.916783       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:55:36.925822       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:55:42.014482       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:55:51.091344       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:55:51.190456       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:55:52.128829       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999] <==
	* I0531 17:55:50.394877       1 shared_informer.go:247] Caches are synced for HPA 
	I0531 17:55:50.406053       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 17:55:50.437491       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:55:50.438548       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:55:50.438580       1 shared_informer.go:247] Caches are synced for GC 
	I0531 17:55:50.438605       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 17:55:50.438586       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0531 17:55:50.438629       1 shared_informer.go:247] Caches are synced for endpoint 
	I0531 17:55:50.438646       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0531 17:55:50.537706       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 17:55:50.537739       1 disruption.go:371] Sending events to api server.
	I0531 17:55:50.542100       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:55:50.546629       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:55:50.574949       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0531 17:55:50.588247       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 17:55:50.965267       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:55:51.037122       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:55:51.037154       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:55:51.095058       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:55:51.107553       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:55:51.196401       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vdbp9"
	I0531 17:55:51.200003       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ff6gx"
	I0531 17:55:51.342466       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-z47gr"
	I0531 17:55:51.346589       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-92zgx"
	I0531 17:55:51.362421       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-z47gr"
	
	* 
	* ==> kube-proxy [cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783] <==
	* I0531 17:55:52.033542       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0531 17:55:52.033619       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0531 17:55:52.033664       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:55:52.125079       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:55:52.125116       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:55:52.125125       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:55:52.125149       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:55:52.125539       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:55:52.126126       1 config.go:317] "Starting service config controller"
	I0531 17:55:52.126162       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:55:52.126352       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:55:52.126370       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:55:52.227300       1 shared_informer.go:247] Caches are synced for service config 
	I0531 17:55:52.227972       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb] <==
	* W0531 17:55:34.201559       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:55:34.201633       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:55:34.201879       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:55:34.202010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:55:34.202066       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:34.202150       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:34.202177       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:55:34.202150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:34.202470       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 17:55:34.202627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 17:55:34.202947       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:55:34.203128       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:55:34.204109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:55:34.204191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:55:34.204440       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:55:34.204494       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:55:35.025337       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:55:35.025375       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:55:35.045433       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:55:35.045468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 17:55:35.202591       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:55:35.202639       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:55:35.202763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:55:35.202795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0531 17:55:37.118161       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:55:18 UTC, end at Tue 2022-05-31 18:07:57 UTC. --
	May 31 18:06:55 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:06:55.028868    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:06:55 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:06:55.029198    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:06:57 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:06:57.366480    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:02 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:02.367936    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:06 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:06.029241    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:06 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:06.029647    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:07 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:07.368899    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:12 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:12.369643    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:17 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:17.029395    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:17 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:17.029877    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:17 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:17.370433    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:22 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:22.371847    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:27 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:27.373452    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:29 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:29.028958    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:29 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:29.029240    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:32 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:32.374707    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:37 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:37.375867    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:40 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:40.029461    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:40 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:40.029827    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:42 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:42.377199    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:47 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:47.378213    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:52 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:52.379654    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:55 default-k8s-different-port-20220531175509-6903 kubelet[1298]: I0531 18:07:55.028844    1298 scope.go:110] "RemoveContainer" containerID="52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	May 31 18:07:55 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:55.029227    1298 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-vdbp9_kube-system(79d3fb6a-0f34-4e42-809a-d4b9107ab071)\"" pod="kube-system/kindnet-vdbp9" podUID=79d3fb6a-0f34-4e42-809a-d4b9107ab071
	May 31 18:07:57 default-k8s-different-port-20220531175509-6903 kubelet[1298]: E0531 18:07:57.381168    1298 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-92zgx storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 describe pod busybox coredns-64897985d-92zgx storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod busybox coredns-64897985d-92zgx storage-provisioner: exit status 1 (54.513936ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68wn9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-68wn9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  51s (x8 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-92zgx" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod busybox coredns-64897985d-92zgx storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.27s)
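
Both DeployApp failures here and in the embed-certs group below show the same signature in the captured logs: kindnet-cni is stuck in CrashLoopBackOff, the CNI plugin never initializes, the node therefore keeps its node.kubernetes.io/not-ready taint, and the busybox pod can never schedule. A minimal triage sketch for this failure mode, assuming kubectl access to the same context and the DaemonSet's conventional app=kindnet label (illustrative commands, not captured from this run):

	# Confirm the taint that blocks scheduling
	kubectl --context default-k8s-different-port-20220531175509-6903 get nodes -o jsonpath='{.items[0].spec.taints}'
	# Find the crashing CNI pods, then pull the previous (failed) container's logs
	kubectl --context default-k8s-different-port-20220531175509-6903 -n kube-system get pods -l app=kindnet
	kubectl --context default-k8s-different-port-20220531175509-6903 -n kube-system logs -l app=kindnet --previous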

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (484.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f7a79a99-ec6a-4599-a8bc-6b8b34d2ea12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0531 18:01:05.001890    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: ***** TestStartStop/group/embed-certs/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:198: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
start_stop_delete_test.go:198: TestStartStop/group/embed-certs/serial/DeployApp: showing logs for failed pods as of 2022-05-31 18:08:52.856832899 +0000 UTC m=+3384.381979167
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 describe po busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context embed-certs-20220531175604-6903 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hphtm (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-hphtm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  45s (x8 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 logs busybox -n default
start_stop_delete_test.go:198: (dbg) kubectl --context embed-certs-20220531175604-6903 logs busybox -n default:
start_stop_delete_test.go:198: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
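The testdata/busybox.yaml applied at the start of this test is not reproduced in the report, but the describe output above pins down its essential fields. A minimal equivalent manifest, reconstructed from those fields (a sketch, not the actual test fixture), would be:

	kubectl --context embed-certs-20220531175604-6903 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  namespace: default
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF

Note that the pod carries only the default NoExecute tolerations for not-ready/unreachable (added for eviction, not scheduling), so the untolerated taint the scheduler reports is the NoSchedule variant that the node controller applies while the node is NotReady.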
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531175604-6903
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531175604-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f",
	        "Created": "2022-05-31T17:56:17.948185818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244636,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:56:18.300730024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hosts",
	        "LogPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f-json.log",
	        "Name": "/embed-certs-20220531175604-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531175604-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531175604-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531175604-6903",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531175604-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531175604-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11a6b63b5abe8f9c9428988cf4db6f03035277ca15e61a9acec7f8823d618698",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/11a6b63b5abe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531175604-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ac8a0a6250b5",
	                        "embed-certs-20220531175604-6903"
	                    ],
	                    "NetworkID": "810e286ea2469d855f00ec56445da0705b1ca1a44b439a6e099264f06730a27d",
	                    "EndpointID": "d7bf905c93b04663aeaeb7c5b125cdceaf3e7b5b400379603ca717422c8036ad",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
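The full inspect dump above is verbose; the fields that matter for triage can be pulled directly with Go templates, the same mechanism the harness uses elsewhere in this log (e.g. --format={{.State.Status}}). For example:

	docker inspect embed-certs-20220531175604-6903 --format '{{.State.Status}} (pid {{.State.Pid}})'
	docker inspect embed-certs-20220531175604-6903 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

The first prints the container state (running, pid 244636 in the dump above); the second resolves the host port mapped to the API server's 8443/tcp (49414 above).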
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220531175604-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:08:08
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:08:08.309660  265084 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:08:08.309791  265084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:08:08.309803  265084 out.go:309] Setting ErrFile to fd 2...
	I0531 18:08:08.309815  265084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:08:08.309926  265084 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:08:08.310162  265084 out.go:303] Setting JSON to false
	I0531 18:08:08.311302  265084 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6639,"bootTime":1654013849,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:08:08.311358  265084 start.go:125] virtualization: kvm guest
	I0531 18:08:08.313832  265084 out.go:177] * [default-k8s-different-port-20220531175509-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:08:08.315362  265084 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:08:08.315369  265084 notify.go:193] Checking for updates...
	I0531 18:08:08.316763  265084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:08:08.318244  265084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:08:08.319779  265084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:08:08.321340  265084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:08:08.323191  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:08:08.323603  265084 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:08:08.362346  265084 docker.go:137] docker version: linux-20.10.16
	I0531 18:08:08.362439  265084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:08:08.463602  265084 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:08:08.390259074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:08:08.463698  265084 docker.go:254] overlay module found
	I0531 18:08:08.465780  265084 out.go:177] * Using the docker driver based on existing profile
	I0531 18:08:08.467039  265084 start.go:284] selected driver: docker
	I0531 18:08:08.467049  265084 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-
20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true
system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:08.467161  265084 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:08:08.468025  265084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:08:08.564858  265084 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:08:08.495990048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:08:08.565102  265084 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:08:08.565123  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:08.565130  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:08.565142  265084 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:08.567536  265084 out.go:177] * Starting control plane node default-k8s-different-port-20220531175509-6903 in cluster default-k8s-different-port-20220531175509-6903
	I0531 18:08:08.568944  265084 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:08:08.570365  265084 out.go:177] * Pulling base image ...
	I0531 18:08:08.571649  265084 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:08:08.571672  265084 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:08:08.571689  265084 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:08:08.571699  265084 cache.go:57] Caching tarball of preloaded images
	I0531 18:08:08.571897  265084 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:08:08.571914  265084 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:08:08.572029  265084 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json ...
	I0531 18:08:08.619058  265084 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:08:08.619084  265084 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:08:08.619096  265084 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:08:08.619126  265084 start.go:352] acquiring machines lock for default-k8s-different-port-20220531175509-6903: {Name:mk53f02aa9701786e51ee0c8a5d73dcf46801d8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:08:08.619250  265084 start.go:356] acquired machines lock for "default-k8s-different-port-20220531175509-6903" in 60.577µs
	I0531 18:08:08.619274  265084 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:08:08.619282  265084 fix.go:55] fixHost starting: 
	I0531 18:08:08.619518  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:08:08.649852  265084 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220531175509-6903: state=Stopped err=<nil>
	W0531 18:08:08.649892  265084 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:08:08.651929  265084 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220531175509-6903" ...
	I0531 18:08:08.706025  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:11.205176  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:08.653246  265084 cli_runner.go:164] Run: docker start default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.036886  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:08:09.070303  265084 kic.go:416] container "default-k8s-different-port-20220531175509-6903" state is running.
	I0531 18:08:09.070670  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.103605  265084 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json ...
	I0531 18:08:09.103829  265084 machine.go:88] provisioning docker machine ...
	I0531 18:08:09.103858  265084 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220531175509-6903"
	I0531 18:08:09.103909  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.134428  265084 main.go:134] libmachine: Using SSH client type: native
	I0531 18:08:09.134578  265084 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0531 18:08:09.134603  265084 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220531175509-6903 && echo "default-k8s-different-port-20220531175509-6903" | sudo tee /etc/hostname
	I0531 18:08:09.135241  265084 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37084->127.0.0.1:49437: read: connection reset by peer
	I0531 18:08:12.259673  265084 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220531175509-6903
	
	I0531 18:08:12.259750  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.291506  265084 main.go:134] libmachine: Using SSH client type: native
	I0531 18:08:12.291664  265084 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0531 18:08:12.291697  265084 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220531175509-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220531175509-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220531175509-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:08:12.398559  265084 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:08:12.398585  265084 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:08:12.398600  265084 ubuntu.go:177] setting up certificates
	I0531 18:08:12.398609  265084 provision.go:83] configureAuth start
	I0531 18:08:12.398666  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.431013  265084 provision.go:138] copyHostCerts
	I0531 18:08:12.431073  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:08:12.431088  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:08:12.431178  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:08:12.431291  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:08:12.431308  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:08:12.431354  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:08:12.431426  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:08:12.431439  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:08:12.431471  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:08:12.431572  265084 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220531175509-6903 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220531175509-6903]
	I0531 18:08:12.598055  265084 provision.go:172] copyRemoteCerts
	I0531 18:08:12.598106  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:08:12.598136  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.631111  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:12.714018  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0531 18:08:12.731288  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:08:12.747333  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:08:12.763254  265084 provision.go:86] duration metric: configureAuth took 364.63384ms
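Note: the server certificate generated by provision.go above embeds the SANs listed there (192.168.76.2, 127.0.0.1, localhost, minikube, and the profile name) and is copied to /etc/docker/server.pem on the node. A minimal sketch for confirming those SANs landed in the cert, assuming shell access to the node (e.g. via minikube ssh -p default-k8s-different-port-20220531175509-6903):

    # Print the Subject Alternative Name extension of the provisioned server cert
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'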
	I0531 18:08:12.763282  265084 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:08:12.763474  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:08:12.763490  265084 machine.go:91] provisioned docker machine in 3.659644302s
	I0531 18:08:12.763497  265084 start.go:306] post-start starting for "default-k8s-different-port-20220531175509-6903" (driver="docker")
	I0531 18:08:12.763505  265084 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:08:12.763543  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:08:12.763579  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.795235  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:12.873714  265084 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:08:12.876227  265084 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:08:12.876248  265084 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:08:12.876257  265084 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:08:12.876262  265084 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:08:12.876270  265084 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:08:12.876309  265084 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:08:12.876369  265084 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:08:12.876457  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:08:12.882555  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:08:12.898406  265084 start.go:309] post-start completed in 134.899493ms
	I0531 18:08:12.898470  265084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:08:12.898502  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.929840  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.011134  265084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:08:13.014770  265084 fix.go:57] fixHost completed within 4.39548261s
	I0531 18:08:13.014795  265084 start.go:81] releasing machines lock for "default-k8s-different-port-20220531175509-6903", held for 4.3955315s
	I0531 18:08:13.014869  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.046127  265084 ssh_runner.go:195] Run: systemctl --version
	I0531 18:08:13.046172  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.046174  265084 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:08:13.046264  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.079038  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.079600  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.163089  265084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:08:13.184388  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:08:13.192965  265084 docker.go:187] disabling docker service ...
	I0531 18:08:13.193006  265084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:08:13.201843  265084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:08:13.209984  265084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:08:13.281373  265084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:08:13.705063  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:16.205156  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:13.351161  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
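Note: docker.go:187 stops and masks the Docker units so that containerd is the only runtime serving the CRI on this node. A quick check that the mask took effect (a sketch, run on the node; systemctl prints "masked" and exits non-zero for masked units):

    # docker should be masked; containerd should remain the active runtime
    systemctl is-enabled docker.service docker.socket   # expected output: masked
    systemctl is-active containerd                      # expected output: active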
	I0531 18:08:13.359679  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:08:13.371415  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
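Note: the base64 blob above is the generated /etc/containerd/config.toml. Decoded, the settings most relevant to this run are the CRI sandbox image, the cgroup driver, and a CNI conf dir that matches the kubelet's --cni-conf-dir=/etc/cni/net.mk flag further down. A spot-check on the node (the commented values are what the blob decodes to):

    # Inspect the config that `base64 -d | sudo tee` just wrote
    sudo grep -E 'sandbox_image|snapshotter|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    #     sandbox_image = "k8s.gcr.io/pause:3.6"
    #       snapshotter = "overlayfs"
    #           SystemdCgroup = false
    #       conf_dir = "/etc/cni/net.mk"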
	I0531 18:08:13.383601  265084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:08:13.389381  265084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:08:13.395293  265084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:08:13.467306  265084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:08:13.544767  265084 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:08:13.544838  265084 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:08:13.549043  265084 start.go:468] Will wait 60s for crictl version
	I0531 18:08:13.549097  265084 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:08:13.581186  265084 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:08:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:08:18.704597  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:21.205707  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:24.627975  265084 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:08:24.650848  265084 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:08:24.650905  265084 ssh_runner.go:195] Run: containerd --version
	I0531 18:08:24.677319  265084 ssh_runner.go:195] Run: containerd --version
	I0531 18:08:24.704802  265084 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:08:24.706277  265084 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220531175509-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:08:24.735854  265084 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0531 18:08:24.738892  265084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:08:24.749702  265084 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:08:23.704978  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:25.705116  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
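Note: the interleaved 261225 lines belong to a different test profile running in parallel; its coredns pod stays Pending because the single node still carries the node.kubernetes.io/not-ready taint, which the pod does not tolerate. To see the taint directly (a sketch; assumes kubectl is pointed at that profile's context):

    # Show why the scheduler reports "0/1 nodes are available"
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
    kubectl -n kube-system get pod coredns-64897985d-8cptk -o wide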
	I0531 18:08:24.751112  265084 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:08:24.751189  265084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:08:24.773113  265084 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:08:24.773129  265084 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:08:24.773160  265084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:08:24.794357  265084 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:08:24.794373  265084 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:08:24.794406  265084 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:08:24.815845  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:24.815862  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:24.815876  265084 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:08:24.815892  265084 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220531175509-6903 NodeName:default-k8s-different-port-20220531175509-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:08:24.816032  265084 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220531175509-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
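Note: the rendered config above is a single YAML file containing four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Once it is written to /var/tmp/minikube/kubeadm.yaml a few lines below, the structure can be confirmed on the node with:

    # One document per kubeadm/kubelet/kube-proxy component
    sudo grep '^kind:' /var/tmp/minikube/kubeadm.yaml
    # kind: InitConfiguration
    # kind: ClusterConfiguration
    # kind: KubeletConfiguration
    # kind: KubeProxyConfiguration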
	
	I0531 18:08:24.816118  265084 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220531175509-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0531 18:08:24.816165  265084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:08:24.822458  265084 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:08:24.822505  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:08:24.828560  265084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0531 18:08:24.840392  265084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:08:24.851809  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
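Note: the three `scp memory` lines materialize in-memory buffers onto the node: the kubelet systemd drop-in, the kubelet unit, and the kubeadm config. To review what systemd will actually execute for the kubelet (run on the node):

    # Unit file plus the 10-kubeadm.conf drop-in written above
    systemctl cat kubelet
    # or just the drop-in:
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf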
	I0531 18:08:24.863569  265084 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:08:24.866080  265084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:08:24.875701  265084 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903 for IP: 192.168.76.2
	I0531 18:08:24.875793  265084 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:08:24.875829  265084 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:08:24.875892  265084 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.key
	I0531 18:08:24.875942  265084 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key.31bdca25
	I0531 18:08:24.875977  265084 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key
	I0531 18:08:24.876064  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:08:24.876092  265084 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:08:24.876104  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:08:24.876131  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:08:24.876152  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:08:24.876182  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:08:24.876220  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:08:24.876773  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:08:24.892395  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 18:08:24.907892  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:08:24.923592  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:08:24.939375  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:08:24.954761  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:08:24.970309  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:08:24.985770  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:08:25.002079  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:08:25.017430  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:08:25.032835  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:08:25.048607  265084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:08:25.059993  265084 ssh_runner.go:195] Run: openssl version
	I0531 18:08:25.064220  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:08:25.070801  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.073567  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.073614  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.077984  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:08:25.084035  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:08:25.090628  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.093313  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.093361  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.097725  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:08:25.103766  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:08:25.110369  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.113149  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.113180  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.117580  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
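Note: the openssl/ln pairs above replicate what c_rehash does: each CA certificate gets a symlink in /etc/ssl/certs named after its 8-hex-digit subject hash plus a ".0" suffix, which is how OpenSSL locates trust anchors. For example, for the 6903.pem cert just linked:

    # Compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem   # prints 51391683
    ls -l /etc/ssl/certs/51391683.0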
	I0531 18:08:25.123696  265084 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:25.123784  265084 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:08:25.123821  265084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:08:25.146591  265084 cri.go:87] found id: "52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	I0531 18:08:25.146619  265084 cri.go:87] found id: "cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783"
	I0531 18:08:25.146630  265084 cri.go:87] found id: "a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb"
	I0531 18:08:25.146638  265084 cri.go:87] found id: "1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11"
	I0531 18:08:25.146644  265084 cri.go:87] found id: "509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999"
	I0531 18:08:25.146653  265084 cri.go:87] found id: "ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f"
	I0531 18:08:25.146661  265084 cri.go:87] found id: ""
	I0531 18:08:25.146697  265084 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:08:25.157547  265084 cri.go:114] JSON = null
	W0531 18:08:25.157585  265084 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
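Note: the warning means `runc list` under containerd's k8s.io root reported no paused containers while `crictl ps -a` saw six, so the unpause pass is skipped as a no-op rather than failing the start. The two views can be compared by hand with the same commands the log runs:

    # Containers as runc sees them vs. as the CRI sees them
    sudo runc --root /run/containerd/runc/k8s.io list -f json
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l   # 6 here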
	I0531 18:08:25.157630  265084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:08:25.163950  265084 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:08:25.163970  265084 kubeadm.go:626] restartCluster start
	I0531 18:08:25.163999  265084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:08:25.169734  265084 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.170732  265084 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220531175509-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:08:25.171454  265084 kubeconfig.go:127] "default-k8s-different-port-20220531175509-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:08:25.172470  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:08:25.173986  265084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:08:25.179942  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.179983  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.186965  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
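Note: the blocks below poll for the apiserver with pgrep: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest matching pid. Until kube-apiserver starts, each probe exits 1 with empty output, which is all the repeated stdout/stderr blocks record:

    # The probe minikube loops on; exits 0 only once kube-apiserver is running
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'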
	I0531 18:08:25.387200  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.387275  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.395846  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.587039  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.587105  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.595520  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.787853  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.787919  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.796087  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.987443  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.987592  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.995763  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.188042  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.188119  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.196440  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.387758  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.387821  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.395923  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.587200  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.587257  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.595464  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.787757  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.787847  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.796141  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.987434  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.987519  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.995749  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.188036  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.188093  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.196930  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.387163  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.387241  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.395603  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.587873  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.587940  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.596151  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.787448  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.787529  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.795830  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.987084  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.987165  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.995664  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.187945  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:28.188030  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:28.196347  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.196370  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:28.196404  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:28.204102  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.204125  265084 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:08:28.204132  265084 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:08:28.204145  265084 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:08:28.204198  265084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:08:28.227632  265084 cri.go:87] found id: "52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	I0531 18:08:28.227660  265084 cri.go:87] found id: "cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783"
	I0531 18:08:28.227671  265084 cri.go:87] found id: "a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb"
	I0531 18:08:28.227679  265084 cri.go:87] found id: "1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11"
	I0531 18:08:28.227685  265084 cri.go:87] found id: "509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999"
	I0531 18:08:28.227691  265084 cri.go:87] found id: "ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f"
	I0531 18:08:28.227700  265084 cri.go:87] found id: ""
	I0531 18:08:28.227705  265084 cri.go:232] Stopping containers: [52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463 cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783 a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb 1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11 509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999 ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f]
	I0531 18:08:28.227754  265084 ssh_runner.go:195] Run: which crictl
	I0531 18:08:28.230377  265084 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463 cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783 a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb 1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11 509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999 ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f
	I0531 18:08:28.253379  265084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:08:28.263239  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:08:28.269611  265084 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 17:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 17:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 May 31 17:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 17:55 /etc/kubernetes/scheduler.conf
	
	I0531 18:08:28.269655  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0531 18:08:28.276169  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0531 18:08:28.282320  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0531 18:08:28.288727  265084 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.288764  265084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:08:28.294577  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0531 18:08:28.300576  265084 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.300611  265084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:08:28.306535  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:08:27.705434  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:30.205110  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:28.313163  265084 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:08:28.313181  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.354378  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.801587  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.930245  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.977387  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
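Note: rather than a full `kubeadm init`, restartCluster replays only the phases needed to bring the existing cluster back: certs, kubeconfig, kubelet-start, control-plane, and etcd. Condensed into a loop (a sketch using the same commands and paths as the five Run lines above):

    # The five init phases minikube replays on a cluster restart
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into two arguments
      sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done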
	I0531 18:08:29.027665  265084 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:08:29.027728  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:29.536233  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:30.036182  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:30.536067  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:31.035853  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:31.535756  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:32.036379  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:32.536341  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:33.036406  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:32.704857  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:34.705759  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:33.536689  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:34.036411  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:34.536112  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:35.036299  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:35.111746  265084 api_server.go:71] duration metric: took 6.084083791s to wait for apiserver process to appear ...
	I0531 18:08:35.111779  265084 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:08:35.111789  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:35.112142  265084 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0531 18:08:35.612870  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:38.446439  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:08:38.446468  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:08:38.612744  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:38.618441  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:38.618510  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:39.113066  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:39.117198  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:39.117223  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:39.612302  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:39.616945  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:39.616967  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:40.112481  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:40.117156  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0531 18:08:40.122751  265084 api_server.go:140] control plane version: v1.23.6
	I0531 18:08:40.122771  265084 api_server.go:130] duration metric: took 5.010986211s to wait for apiserver health ...
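
The 403, then 500, then 200 responses above are the normal apiserver bootstrap sequence: anonymous /healthz probes are rejected until the RBAC bootstrap roles exist, the endpoint then returns 500 while poststart hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A minimal Go sketch of the same polling loop; the URL is copied from the log, and the insecure TLS client is an assumption for illustration only:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz probes an apiserver /healthz URL until it returns 200 or the
// timeout expires; 403 and 500 responses are treated as "not ready yet".
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: the bootstrap apiserver serves a cert this
			// client cannot verify, so verification is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not return 200 within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.76.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
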
	I0531 18:08:40.122780  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:40.122788  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:40.124631  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:08:37.206138  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:39.207436  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:41.704753  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:40.125848  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:08:40.129376  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:08:40.129394  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:08:40.142078  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
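
The lines above show minikube writing a 2429-byte CNI manifest to /var/tmp/minikube/cni.yaml and applying it with the kubectl binary it provisioned on the node. A rough Go equivalent that shells out with the same flags (paths copied from the log; running this anywhere but inside that node is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Apply the CNI manifest using the node-local kubectl and kubeconfig.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.6/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl apply failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
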
	I0531 18:08:40.734264  265084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:08:40.740842  265084 system_pods.go:59] 9 kube-system pods found
	I0531 18:08:40.740873  265084 system_pods.go:61] "coredns-64897985d-92zgx" [b91e17cd-2735-4a67-a78b-9f06d1ea411e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740885  265084 system_pods.go:61] "etcd-default-k8s-different-port-20220531175509-6903" [13ef129d-4fca-4990-84b0-03bfdcfabf1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:08:40.740894  265084 system_pods.go:61] "kindnet-vdbp9" [79d3fb6a-0f34-4e42-809a-d4b9107ab071] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:08:40.740901  265084 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531175509-6903" [a547e888-b760-4d90-8f4c-50685def1dd3] Running
	I0531 18:08:40.740916  265084 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531175509-6903" [b23304a6-b5b1-4237-bfbb-6029f2c79380] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:08:40.740923  265084 system_pods.go:61] "kube-proxy-ff6gx" [4d094300-69cc-429e-8b17-52f2ddb8b9c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:08:40.740933  265084 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531175509-6903" [c7f2ccba-dc09-41b5-815a-1d7e16814c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:08:40.740942  265084 system_pods.go:61] "metrics-server-b955d9d8-wvb9t" [f87f1c60-e753-4d02-8ae1-914a03b2b27a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740951  265084 system_pods.go:61] "storage-provisioner" [e1f494e4-cf90-42c5-b10b-93f3fff7bcc7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740955  265084 system_pods.go:74] duration metric: took 6.673189ms to wait for pod list to return data ...
	I0531 18:08:40.740965  265084 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:08:40.743326  265084 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:08:40.743349  265084 node_conditions.go:123] node cpu capacity is 8
	I0531 18:08:40.743360  265084 node_conditions.go:105] duration metric: took 2.389262ms to run NodePressure ...
	I0531 18:08:40.743379  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:40.862173  265084 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:08:40.865722  265084 kubeadm.go:777] kubelet initialised
	I0531 18:08:40.865747  265084 kubeadm.go:778] duration metric: took 3.542091ms waiting for restarted kubelet to initialise ...
	I0531 18:08:40.865755  265084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:08:40.870532  265084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	I0531 18:08:42.875463  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:44.204782  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:46.204845  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:44.876615  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:47.375518  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:48.205463  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:50.705334  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:49.375999  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:51.875901  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
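
The repeated pod_ready polls above check the coredns pods' conditions; both pods stay Pending because the lone node still carries the node.kubernetes.io/not-ready taint they do not tolerate. A minimal client-go sketch of the same Ready check (the kubeconfig path and pod name below are assumptions taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-64897985d-92zgx", metav1.GetOptions{})
		switch {
		case err != nil:
			fmt.Println("get pod:", err)
		case podReady(pod):
			fmt.Println("coredns is Ready")
			return
		default:
			fmt.Printf("pod %s still %s\n", pod.Name, pod.Status.Phase)
		}
		time.Sleep(2 * time.Second)
	}
}
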
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1c61a8e4e6919       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   ff9f2b9e4b710
	2dd4c6e62c848       4c03754524064       12 minutes ago      Running             kube-proxy                0                   9882491c2eb7b
	bce895f043845       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   6b4c83a2bc23d
	93653e4eba8ad       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   b7f9cd5df9ca2
	8878d3b54661f       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   b8e30e5a630dd
	55beac89e1876       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   56c483605d4d5
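
In the table above the control-plane containers are Running while kindnet-cni has Exited on its third attempt; that crash loop is what keeps the node's network from coming up. The table mirrors what crictl reports on the node; a small Go sketch that shells out to it (assumes it runs inside the minikube node, where crictl is installed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List all containers, including exited attempts of crash-looping pods.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl ps failed: %v\n", err)
	}
	fmt.Printf("%s", out)
}
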
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:56:18 UTC, end at Tue 2022-05-31 18:08:53 UTC. --
	May 31 18:02:14 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:14.040617374Z" level=info msg="cleaning up dead shim"
	May 31 18:02:14 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:14.049753604Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:02:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2428 runtime=io.containerd.runc.v2\n"
	May 31 18:02:14 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:14.899314115Z" level=info msg="RemoveContainer for \"2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6\""
	May 31 18:02:14 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:14.904432367Z" level=info msg="RemoveContainer for \"2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6\" returns successfully"
	May 31 18:02:27 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:27.321435963Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	May 31 18:02:27 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:27.333738268Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\""
	May 31 18:02:27 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:27.334215010Z" level=info msg="StartContainer for \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\""
	May 31 18:02:27 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:27.404911516Z" level=info msg="StartContainer for \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\" returns successfully"
	May 31 18:05:07 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:07.631085147Z" level=info msg="shim disconnected" id=fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5
	May 31 18:05:07 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:07.631181379Z" level=warning msg="cleaning up after shim disconnected" id=fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5 namespace=k8s.io
	May 31 18:05:07 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:07.631202230Z" level=info msg="cleaning up dead shim"
	May 31 18:05:07 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:07.640116873Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:05:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2525 runtime=io.containerd.runc.v2\n"
	May 31 18:05:08 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:08.196347094Z" level=info msg="RemoveContainer for \"cd37d7e6c7fad1ef674cc2879250f6da83924608590f4597ae0ed06d537d8e48\""
	May 31 18:05:08 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:08.200755201Z" level=info msg="RemoveContainer for \"cd37d7e6c7fad1ef674cc2879250f6da83924608590f4597ae0ed06d537d8e48\" returns successfully"
	May 31 18:05:32 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:32.320868293Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	May 31 18:05:32 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:32.333566580Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028\""
	May 31 18:05:32 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:32.334054335Z" level=info msg="StartContainer for \"1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028\""
	May 31 18:05:32 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:32.405209371Z" level=info msg="StartContainer for \"1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028\" returns successfully"
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.645372277Z" level=info msg="shim disconnected" id=1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.645435297Z" level=warning msg="cleaning up after shim disconnected" id=1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 namespace=k8s.io
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.645450825Z" level=info msg="cleaning up dead shim"
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.647336744Z" level=error msg="collecting metrics for 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028" error="ttrpc: closed: unknown"
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.657009517Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:08:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2622 runtime=io.containerd.runc.v2\n"
	May 31 18:08:13 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:13.502319440Z" level=info msg="RemoveContainer for \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\""
	May 31 18:08:13 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:13.506968593Z" level=info msg="RemoveContainer for \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531175604-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531175604-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531175604-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_56_37_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:56:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531175604-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:08:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:07:04 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:07:04 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:07:04 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:07:04 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20220531175604-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                9377e8f5-ae2b-465c-b601-bd790903b8eb
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220531175604-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-jrlsl                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20220531175604-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20220531175604-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nvktf                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20220531175604-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x3 over 12m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
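
The node report above ties the failure together: Ready is False with reason KubeletNotReady ("cni plugin not initialized"), so the node.kubernetes.io/not-ready:NoSchedule taint never clears and the Pending pods seen earlier cannot schedule. The same condition and taint can be read with client-go; a sketch, with the kubeconfig path again assumed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"embed-certs-20220531175604-6903", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the Ready condition and any taints still on the node.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
}
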
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b] <==
	* {"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:embed-certs-20220531175604-6903 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.824Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:56:31.824Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-31T18:06:31.835Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":550}
	{"level":"info","ts":"2022-05-31T18:06:31.836Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":550,"took":"576.229µs"}
	
	* 
	* ==> kernel <==
	*  18:08:54 up  1:51,  0 users,  load average: 0.49, 0.60, 1.10
	Linux embed-certs-20220531175604-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808] <==
	* I0531 17:56:33.502029       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:56:33.502098       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:56:33.502115       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:56:33.502122       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:56:33.502139       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:56:33.509858       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 17:56:34.382390       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:56:34.382418       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:56:34.386588       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:56:34.389618       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:56:34.389640       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:56:34.731605       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:56:34.758176       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:56:34.832687       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:56:34.837596       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0531 17:56:34.838344       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:56:34.841228       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:56:35.526888       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:56:36.183478       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:56:36.189560       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:56:36.198454       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:56:41.306447       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:56:49.027629       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:56:49.828447       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:56:50.264822       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad] <==
	* I0531 17:56:48.924684       1 shared_informer.go:247] Caches are synced for endpoint 
	I0531 17:56:48.924715       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:56:48.924965       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 17:56:48.926974       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 17:56:48.931979       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 17:56:48.976098       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:56:49.021334       1 shared_informer.go:247] Caches are synced for attach detach 
	I0531 17:56:49.033089       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jrlsl"
	I0531 17:56:49.034599       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nvktf"
	I0531 17:56:49.074109       1 shared_informer.go:247] Caches are synced for taint 
	I0531 17:56:49.074203       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 17:56:49.074226       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0531 17:56:49.074316       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220531175604-6903. Assuming now as a timestamp.
	I0531 17:56:49.074352       1 event.go:294] "Event occurred" object="embed-certs-20220531175604-6903" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220531175604-6903 event: Registered Node embed-certs-20220531175604-6903 in Controller"
	I0531 17:56:49.074384       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0531 17:56:49.133279       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:56:49.136437       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:56:49.552201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:56:49.591219       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:56:49.591238       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:56:49.830463       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:56:49.852290       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:56:49.928800       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-z8m4h"
	I0531 17:56:49.932514       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-w2s2k"
	I0531 17:56:50.002323       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-z8m4h"
	
	* 
	* ==> kube-proxy [2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843] <==
	* I0531 17:56:50.241573       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 17:56:50.241684       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 17:56:50.241784       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:56:50.262124       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:56:50.262154       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:56:50.262162       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:56:50.262174       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:56:50.262538       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:56:50.263031       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:56:50.263090       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:56:50.263054       1 config.go:317] "Starting service config controller"
	I0531 17:56:50.263166       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:56:50.363889       1 shared_informer.go:247] Caches are synced for service config 
	I0531 17:56:50.363890       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e] <==
	* E0531 17:56:33.431718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:56:33.431722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:56:33.431754       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:56:33.431760       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:56:33.431778       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:33.431785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:33.502370       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:56:33.503086       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:56:33.503381       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:33.503415       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:33.503867       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:56:33.503957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 17:56:34.288126       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:56:34.288171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:56:34.331364       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:56:34.331388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:56:34.365507       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:56:34.365532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:56:34.370569       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:34.370601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:34.434332       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:56:34.434358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 17:56:34.741075       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:56:34.741113       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 17:56:36.429128       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:56:18 UTC, end at Tue 2022-05-31 18:08:54 UTC. --
	May 31 18:07:36 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:36.626946    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:41 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:41.630089    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:46 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:46.631528    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:51 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:51.632736    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:56 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:56.633951    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:01 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:01.635594    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:06 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:06.636587    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:11 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:11.637447    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:13 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:13.501189    1306 scope.go:110] "RemoveContainer" containerID="fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5"
	May 31 18:08:13 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:13.501522    1306 scope.go:110] "RemoveContainer" containerID="1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	May 31 18:08:13 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:13.501802    1306 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-jrlsl_kube-system(c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c)\"" pod="kube-system/kindnet-jrlsl" podUID=c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c
	May 31 18:08:16 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:16.638219    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:21 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:21.638993    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:26 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:26.639944    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:27 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:27.319469    1306 scope.go:110] "RemoveContainer" containerID="1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	May 31 18:08:27 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:27.319777    1306 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-jrlsl_kube-system(c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c)\"" pod="kube-system/kindnet-jrlsl" podUID=c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c
	May 31 18:08:31 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:31.641438    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:36 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:36.642589    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:38 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:38.319271    1306 scope.go:110] "RemoveContainer" containerID="1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	May 31 18:08:38 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:38.319524    1306 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-jrlsl_kube-system(c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c)\"" pod="kube-system/kindnet-jrlsl" podUID=c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c
	May 31 18:08:41 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:41.644208    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:46 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:46.645364    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:51 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:51.319642    1306 scope.go:110] "RemoveContainer" containerID="1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	May 31 18:08:51 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:51.319998    1306 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-jrlsl_kube-system(c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c)\"" pod="kube-system/kindnet-jrlsl" podUID=c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c
	May 31 18:08:51 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:51.646167    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
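The kubelet log above never reaches NetworkReady because the kindnet-cni container is stuck in CrashLoopBackOff, so the CNI is never initialized and workload pods cannot be scheduled. A minimal follow-up sketch for pulling the crashed container's previous log (hypothetical command, not captured in this run's output):

	kubectl --context embed-certs-20220531175604-6903 -n kube-system \
	  logs pod/kindnet-jrlsl -c kindnet-cni --previous
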
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-w2s2k storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 describe pod busybox coredns-64897985d-w2s2k storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531175604-6903 describe pod busybox coredns-64897985d-w2s2k storage-provisioner: exit status 1 (55.298107ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hphtm (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hphtm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  47s (x8 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-w2s2k" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531175604-6903 describe pod busybox coredns-64897985d-w2s2k storage-provisioner: exit status 1
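The only event on the busybox pod is FailedScheduling: the single node still carries the node.kubernetes.io/not-ready taint, which follows directly from the uninitialized CNI shown in the kubelet log above. A sketch of how the taint could be confirmed directly (assumed follow-up, not part of the recorded output):

	kubectl --context embed-certs-20220531175604-6903 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
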
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531175604-6903
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531175604-6903:

-- stdout --
	[
	    {
	        "Id": "ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f",
	        "Created": "2022-05-31T17:56:17.948185818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244636,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:56:18.300730024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hosts",
	        "LogPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f-json.log",
	        "Name": "/embed-certs-20220531175604-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531175604-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531175604-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531175604-6903",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531175604-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531175604-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11a6b63b5abe8f9c9428988cf4db6f03035277ca15e61a9acec7f8823d618698",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/11a6b63b5abe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531175604-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ac8a0a6250b5",
	                        "embed-certs-20220531175604-6903"
	                    ],
	                    "NetworkID": "810e286ea2469d855f00ec56445da0705b1ca1a44b439a6e099264f06730a27d",
	                    "EndpointID": "d7bf905c93b04663aeaeb7c5b125cdceaf3e7b5b400379603ca717422c8036ad",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
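The inspect dump shows the kic container itself is healthy: State.Running is true, the node holds IP 192.168.49.2, and the API server port 8443 is published on 127.0.0.1:49414, so the failure sits inside the cluster rather than at the Docker level. The same fields can be pulled directly with Go-template formatting, mirroring the (index ...) pattern minikube itself uses later in these logs (sketch only, not from this run):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "embed-certs-20220531175604-6903").IPAddress}}' embed-certs-20220531175604-6903
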
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220531175604-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:08:08
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:08:08.309660  265084 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:08:08.309791  265084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:08:08.309803  265084 out.go:309] Setting ErrFile to fd 2...
	I0531 18:08:08.309815  265084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:08:08.309926  265084 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:08:08.310162  265084 out.go:303] Setting JSON to false
	I0531 18:08:08.311302  265084 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6639,"bootTime":1654013849,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:08:08.311358  265084 start.go:125] virtualization: kvm guest
	I0531 18:08:08.313832  265084 out.go:177] * [default-k8s-different-port-20220531175509-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:08:08.315362  265084 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:08:08.315369  265084 notify.go:193] Checking for updates...
	I0531 18:08:08.316763  265084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:08:08.318244  265084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:08:08.319779  265084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:08:08.321340  265084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:08:08.323191  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:08:08.323603  265084 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:08:08.362346  265084 docker.go:137] docker version: linux-20.10.16
	I0531 18:08:08.362439  265084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:08:08.463602  265084 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:08:08.390259074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:08:08.463698  265084 docker.go:254] overlay module found
	I0531 18:08:08.465780  265084 out.go:177] * Using the docker driver based on existing profile
	I0531 18:08:08.467039  265084 start.go:284] selected driver: docker
	I0531 18:08:08.467049  265084 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:08.467161  265084 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:08:08.468025  265084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:08:08.564858  265084 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:08:08.495990048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:08:08.565102  265084 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:08:08.565123  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:08.565130  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:08.565142  265084 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:08.567536  265084 out.go:177] * Starting control plane node default-k8s-different-port-20220531175509-6903 in cluster default-k8s-different-port-20220531175509-6903
	I0531 18:08:08.568944  265084 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:08:08.570365  265084 out.go:177] * Pulling base image ...
	I0531 18:08:08.571649  265084 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:08:08.571672  265084 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:08:08.571689  265084 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:08:08.571699  265084 cache.go:57] Caching tarball of preloaded images
	I0531 18:08:08.571897  265084 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:08:08.571914  265084 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:08:08.572029  265084 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json ...
	I0531 18:08:08.619058  265084 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:08:08.619084  265084 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:08:08.619096  265084 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:08:08.619126  265084 start.go:352] acquiring machines lock for default-k8s-different-port-20220531175509-6903: {Name:mk53f02aa9701786e51ee0c8a5d73dcf46801d8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:08:08.619250  265084 start.go:356] acquired machines lock for "default-k8s-different-port-20220531175509-6903" in 60.577µs
	I0531 18:08:08.619274  265084 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:08:08.619282  265084 fix.go:55] fixHost starting: 
	I0531 18:08:08.619518  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:08:08.649852  265084 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220531175509-6903: state=Stopped err=<nil>
	W0531 18:08:08.649892  265084 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:08:08.651929  265084 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220531175509-6903" ...
	I0531 18:08:08.706025  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:11.205176  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:08.653246  265084 cli_runner.go:164] Run: docker start default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.036886  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:08:09.070303  265084 kic.go:416] container "default-k8s-different-port-20220531175509-6903" state is running.
	I0531 18:08:09.070670  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.103605  265084 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json ...
	I0531 18:08:09.103829  265084 machine.go:88] provisioning docker machine ...
	I0531 18:08:09.103858  265084 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220531175509-6903"
	I0531 18:08:09.103909  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.134428  265084 main.go:134] libmachine: Using SSH client type: native
	I0531 18:08:09.134578  265084 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0531 18:08:09.134603  265084 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220531175509-6903 && echo "default-k8s-different-port-20220531175509-6903" | sudo tee /etc/hostname
	I0531 18:08:09.135241  265084 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37084->127.0.0.1:49437: read: connection reset by peer
	I0531 18:08:12.259673  265084 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220531175509-6903
	
	I0531 18:08:12.259750  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.291506  265084 main.go:134] libmachine: Using SSH client type: native
	I0531 18:08:12.291664  265084 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0531 18:08:12.291697  265084 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220531175509-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220531175509-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220531175509-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:08:12.398559  265084 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:08:12.398585  265084 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:08:12.398600  265084 ubuntu.go:177] setting up certificates
	I0531 18:08:12.398609  265084 provision.go:83] configureAuth start
	I0531 18:08:12.398666  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.431013  265084 provision.go:138] copyHostCerts
	I0531 18:08:12.431073  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:08:12.431088  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:08:12.431178  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:08:12.431291  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:08:12.431308  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:08:12.431354  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:08:12.431426  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:08:12.431439  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:08:12.431471  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:08:12.431572  265084 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220531175509-6903 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220531175509-6903]
	I0531 18:08:12.598055  265084 provision.go:172] copyRemoteCerts
	I0531 18:08:12.598106  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:08:12.598136  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.631111  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:12.714018  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0531 18:08:12.731288  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:08:12.747333  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:08:12.763254  265084 provision.go:86] duration metric: configureAuth took 364.63384ms
	I0531 18:08:12.763282  265084 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:08:12.763474  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:08:12.763490  265084 machine.go:91] provisioned docker machine in 3.659644302s
	I0531 18:08:12.763497  265084 start.go:306] post-start starting for "default-k8s-different-port-20220531175509-6903" (driver="docker")
	I0531 18:08:12.763505  265084 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:08:12.763543  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:08:12.763579  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.795235  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:12.873714  265084 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:08:12.876227  265084 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:08:12.876248  265084 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:08:12.876257  265084 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:08:12.876262  265084 info.go:137] Remote host: Ubuntu 20.04.4 LTS
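	The three "Couldn't set key" warnings above come from mapping /etc/os-release key=value pairs onto a struct: keys with no matching field (PRIVACY_POLICY_URL, VERSION_CODENAME, UBUNTU_CODENAME) are logged and skipped. A rough sketch of that parse, with an illustrative struct rather than libmachine's real one:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// osRelease holds only the fields this sketch cares about; any other key
	// triggers the same kind of "no corresponding struct field" warning.
	type osRelease struct {
		Name, Version, ID string
	}

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		var info osRelease
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			k, v, ok := strings.Cut(sc.Text(), "=")
			if !ok {
				continue
			}
			v = strings.Trim(v, `"`)
			switch k {
			case "NAME":
				info.Name = v
			case "VERSION":
				info.Version = v
			case "ID":
				info.ID = v
			default:
				fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", k)
			}
		}
		fmt.Printf("Remote host: %s %s\n", info.Name, info.Version)
	}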
	I0531 18:08:12.876270  265084 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:08:12.876309  265084 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:08:12.876369  265084 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:08:12.876457  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:08:12.882555  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:08:12.898406  265084 start.go:309] post-start completed in 134.899493ms
	I0531 18:08:12.898470  265084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:08:12.898502  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.929840  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.011134  265084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:08:13.014770  265084 fix.go:57] fixHost completed within 4.39548261s
	I0531 18:08:13.014795  265084 start.go:81] releasing machines lock for "default-k8s-different-port-20220531175509-6903", held for 4.3955315s
	I0531 18:08:13.014869  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.046127  265084 ssh_runner.go:195] Run: systemctl --version
	I0531 18:08:13.046172  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.046174  265084 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:08:13.046264  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.079038  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.079600  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.163089  265084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:08:13.184388  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:08:13.192965  265084 docker.go:187] disabling docker service ...
	I0531 18:08:13.193006  265084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:08:13.201843  265084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:08:13.209984  265084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:08:13.281373  265084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:08:13.705063  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:16.205156  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:13.351161  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:08:13.359679  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:08:13.371415  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
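	The long base64 blob above is the generated containerd config.toml being shipped to the node via base64 -d | sudo tee. To inspect the same blob offline, a small sketch (assuming it has been saved locally as config.b64 — a file name chosen here purely for illustration):

	package main

	import (
		"encoding/base64"
		"fmt"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("config.b64") // hypothetical local copy of the blob
		if err != nil {
			panic(err)
		}
		// StdEncoding ignores \r and \n, so a trailing newline in the file is fine.
		toml, err := base64.StdEncoding.DecodeString(string(raw))
		if err != nil {
			panic(err)
		}
		fmt.Print(string(toml)) // prints the generated /etc/containerd/config.toml
	}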
	I0531 18:08:13.383601  265084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:08:13.389381  265084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:08:13.395293  265084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:08:13.467306  265084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:08:13.544767  265084 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:08:13.544838  265084 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:08:13.549043  265084 start.go:468] Will wait 60s for crictl version
	I0531 18:08:13.549097  265084 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:08:13.581186  265084 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:08:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:08:18.704597  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:21.205707  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:24.627975  265084 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:08:24.650848  265084 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:08:24.650905  265084 ssh_runner.go:195] Run: containerd --version
	I0531 18:08:24.677319  265084 ssh_runner.go:195] Run: containerd --version
	I0531 18:08:24.704802  265084 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:08:24.706277  265084 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220531175509-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:08:24.735854  265084 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0531 18:08:24.738892  265084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:08:24.749702  265084 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:08:23.704978  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:25.705116  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:24.751112  265084 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:08:24.751189  265084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:08:24.773113  265084 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:08:24.773129  265084 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:08:24.773160  265084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:08:24.794357  265084 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:08:24.794373  265084 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:08:24.794406  265084 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:08:24.815845  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:24.815862  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:24.815876  265084 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:08:24.815892  265084 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220531175509-6903 NodeName:default-k8s-different-port-20220531175509-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:08:24.816032  265084 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220531175509-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
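	The kubeadm YAML above is rendered from the option struct logged at kubeadm.go:158. In miniature, and with a deliberately tiny template rather than minikube's real one, that rendering step looks like this:

	package main

	import (
		"os"
		"text/template"
	)

	// opts is an illustrative slice of the kubeadm options struct; field
	// values below are taken from the log.
	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Emits the localAPIEndpoint stanza seen at the top of the config above.
		_ = t.Execute(os.Stdout, opts{AdvertiseAddress: "192.168.76.2", APIServerPort: 8444})
	}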
	I0531 18:08:24.816118  265084 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220531175509-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0531 18:08:24.816165  265084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:08:24.822458  265084 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:08:24.822505  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:08:24.828560  265084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0531 18:08:24.840392  265084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:08:24.851809  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0531 18:08:24.863569  265084 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:08:24.866080  265084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:08:24.875701  265084 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903 for IP: 192.168.76.2
	I0531 18:08:24.875793  265084 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:08:24.875829  265084 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:08:24.875892  265084 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.key
	I0531 18:08:24.875942  265084 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key.31bdca25
	I0531 18:08:24.875977  265084 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key
	I0531 18:08:24.876064  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:08:24.876092  265084 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:08:24.876104  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:08:24.876131  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:08:24.876152  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:08:24.876182  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:08:24.876220  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:08:24.876773  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:08:24.892395  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 18:08:24.907892  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:08:24.923592  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:08:24.939375  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:08:24.954761  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:08:24.970309  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:08:24.985770  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:08:25.002079  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:08:25.017430  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:08:25.032835  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:08:25.048607  265084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:08:25.059993  265084 ssh_runner.go:195] Run: openssl version
	I0531 18:08:25.064220  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:08:25.070801  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.073567  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.073614  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.077984  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:08:25.084035  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:08:25.090628  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.093313  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.093361  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.097725  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:08:25.103766  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:08:25.110369  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.113149  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.113180  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.117580  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
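	The `openssl x509 -hash` calls above compute each CA's subject hash, which names the /etc/ssl/certs/<hash>.0 symlink that the following `ln -fs` creates (e.g. b5213941.0 for minikubeCA.pem). A sketch of the same two steps from Go:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := "/etc/ssl/certs/" + hash + ".0"
		// ln -fs equivalent: drop any stale link, then symlink the cert.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err) // needs root, just like the sudo invocations above
		}
		fmt.Println(link, "->", cert)
	}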
	I0531 18:08:25.123696  265084 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:25.123784  265084 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:08:25.123821  265084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:08:25.146591  265084 cri.go:87] found id: "52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	I0531 18:08:25.146619  265084 cri.go:87] found id: "cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783"
	I0531 18:08:25.146630  265084 cri.go:87] found id: "a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb"
	I0531 18:08:25.146638  265084 cri.go:87] found id: "1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11"
	I0531 18:08:25.146644  265084 cri.go:87] found id: "509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999"
	I0531 18:08:25.146653  265084 cri.go:87] found id: "ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f"
	I0531 18:08:25.146661  265084 cri.go:87] found id: ""
	I0531 18:08:25.146697  265084 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:08:25.157547  265084 cri.go:114] JSON = null
	W0531 18:08:25.157585  265084 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0531 18:08:25.157630  265084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:08:25.163950  265084 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:08:25.163970  265084 kubeadm.go:626] restartCluster start
	I0531 18:08:25.163999  265084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:08:25.169734  265084 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.170732  265084 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220531175509-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:08:25.171454  265084 kubeconfig.go:127] "default-k8s-different-port-20220531175509-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:08:25.172470  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:08:25.173986  265084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:08:25.179942  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.179983  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.186965  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.387200  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.387275  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.395846  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.587039  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.587105  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.595520  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.787853  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.787919  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.796087  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.987443  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.987592  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.995763  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.188042  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.188119  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.196440  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.387758  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.387821  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.395923  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.587200  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.587257  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.595464  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.787757  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.787847  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.796141  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.987434  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.987519  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.995749  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.188036  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.188093  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.196930  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.387163  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.387241  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.395603  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.587873  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.587940  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.596151  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.787448  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.787529  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.795830  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.987084  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.987165  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.995664  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.187945  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:28.188030  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:28.196347  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.196370  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:28.196404  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:28.204102  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.204125  265084 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
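	The block above is a fixed-interval poll: roughly every 200ms minikube runs pgrep looking for a kube-apiserver process, and once the deadline passes it concludes the cluster needs reconfiguring. A condensed sketch of that loop (the helper name is illustrative, not minikube's actual API):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Same probe as the log: pgrep exits non-zero while no
			// kube-apiserver process exists yet.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return errors.New("timed out waiting for the condition")
	}

	func main() {
		fmt.Println(waitForAPIServer(3 * time.Second))
	}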
	I0531 18:08:28.204132  265084 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:08:28.204145  265084 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:08:28.204198  265084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:08:28.227632  265084 cri.go:87] found id: "52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	I0531 18:08:28.227660  265084 cri.go:87] found id: "cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783"
	I0531 18:08:28.227671  265084 cri.go:87] found id: "a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb"
	I0531 18:08:28.227679  265084 cri.go:87] found id: "1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11"
	I0531 18:08:28.227685  265084 cri.go:87] found id: "509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999"
	I0531 18:08:28.227691  265084 cri.go:87] found id: "ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f"
	I0531 18:08:28.227700  265084 cri.go:87] found id: ""
	I0531 18:08:28.227705  265084 cri.go:232] Stopping containers: [52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463 cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783 a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb 1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11 509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999 ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f]
	I0531 18:08:28.227754  265084 ssh_runner.go:195] Run: which crictl
	I0531 18:08:28.230377  265084 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463 cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783 a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb 1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11 509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999 ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f
	I0531 18:08:28.253379  265084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:08:28.263239  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:08:28.269611  265084 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 17:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 17:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 May 31 17:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 17:55 /etc/kubernetes/scheduler.conf
	
	I0531 18:08:28.269655  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0531 18:08:28.276169  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0531 18:08:28.282320  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0531 18:08:28.288727  265084 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.288764  265084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:08:28.294577  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0531 18:08:28.300576  265084 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.300611  265084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:08:28.306535  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:08:27.705434  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:30.205110  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:28.313163  265084 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:08:28.313181  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.354378  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.801587  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.930245  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.977387  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:29.027665  265084 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:08:29.027728  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:29.536233  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:30.036182  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:30.536067  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:31.035853  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:31.535756  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:32.036379  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:32.536341  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:33.036406  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:32.704857  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:34.705759  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:33.536689  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:34.036411  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:34.536112  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:35.036299  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:35.111746  265084 api_server.go:71] duration metric: took 6.084083791s to wait for apiserver process to appear ...
	I0531 18:08:35.111779  265084 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:08:35.111789  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:35.112142  265084 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0531 18:08:35.612870  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:38.446439  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:08:38.446468  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:08:38.612744  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:38.618441  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:38.618510  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:39.113066  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:39.117198  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:39.117223  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:39.612302  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:39.616945  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:39.616967  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:40.112481  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:40.117156  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0531 18:08:40.122751  265084 api_server.go:140] control plane version: v1.23.6
	I0531 18:08:40.122771  265084 api_server.go:130] duration metric: took 5.010986211s to wait for apiserver health ...
	I0531 18:08:40.122780  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:40.122788  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:40.124631  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:08:37.206138  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:39.207436  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:41.704753  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:40.125848  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:08:40.129376  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:08:40.129394  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:08:40.142078  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:08:40.734264  265084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:08:40.740842  265084 system_pods.go:59] 9 kube-system pods found
	I0531 18:08:40.740873  265084 system_pods.go:61] "coredns-64897985d-92zgx" [b91e17cd-2735-4a67-a78b-9f06d1ea411e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740885  265084 system_pods.go:61] "etcd-default-k8s-different-port-20220531175509-6903" [13ef129d-4fca-4990-84b0-03bfdcfabf1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:08:40.740894  265084 system_pods.go:61] "kindnet-vdbp9" [79d3fb6a-0f34-4e42-809a-d4b9107ab071] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:08:40.740901  265084 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531175509-6903" [a547e888-b760-4d90-8f4c-50685def1dd3] Running
	I0531 18:08:40.740916  265084 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531175509-6903" [b23304a6-b5b1-4237-bfbb-6029f2c79380] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:08:40.740923  265084 system_pods.go:61] "kube-proxy-ff6gx" [4d094300-69cc-429e-8b17-52f2ddb8b9c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:08:40.740933  265084 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531175509-6903" [c7f2ccba-dc09-41b5-815a-1d7e16814c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:08:40.740942  265084 system_pods.go:61] "metrics-server-b955d9d8-wvb9t" [f87f1c60-e753-4d02-8ae1-914a03b2b27a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740951  265084 system_pods.go:61] "storage-provisioner" [e1f494e4-cf90-42c5-b10b-93f3fff7bcc7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740955  265084 system_pods.go:74] duration metric: took 6.673189ms to wait for pod list to return data ...
	I0531 18:08:40.740965  265084 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:08:40.743326  265084 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:08:40.743349  265084 node_conditions.go:123] node cpu capacity is 8
	I0531 18:08:40.743360  265084 node_conditions.go:105] duration metric: took 2.389262ms to run NodePressure ...
	I0531 18:08:40.743379  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:40.862173  265084 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:08:40.865722  265084 kubeadm.go:777] kubelet initialised
	I0531 18:08:40.865747  265084 kubeadm.go:778] duration metric: took 3.542091ms waiting for restarted kubelet to initialise ...
	I0531 18:08:40.865755  265084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:08:40.870532  265084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	I0531 18:08:42.875463  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:44.204782  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:46.204845  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:44.876615  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:47.375518  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:48.205463  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:50.705334  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:49.375999  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:51.875901  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1c61a8e4e6919       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   ff9f2b9e4b710
	2dd4c6e62c848       4c03754524064       12 minutes ago      Running             kube-proxy                0                   9882491c2eb7b
	bce895f043845       8fa62c12256df       12 minutes ago      Running             kube-apiserver            0                   6b4c83a2bc23d
	93653e4eba8ad       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   b7f9cd5df9ca2
	8878d3b54661f       595f327f224a4       12 minutes ago      Running             kube-scheduler            0                   b8e30e5a630dd
	55beac89e1876       df7b72818ad2e       12 minutes ago      Running             kube-controller-manager   0                   56c483605d4d5
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 17:56:18 UTC, end at Tue 2022-05-31 18:08:55 UTC. --
	May 31 18:02:14 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:14.040617374Z" level=info msg="cleaning up dead shim"
	May 31 18:02:14 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:14.049753604Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:02:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2428 runtime=io.containerd.runc.v2\n"
	May 31 18:02:14 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:14.899314115Z" level=info msg="RemoveContainer for \"2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6\""
	May 31 18:02:14 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:14.904432367Z" level=info msg="RemoveContainer for \"2288455dcf90c8994ebfbe16f7e5d316ee0ff933996a8b32d9f9c91a33ca4bb6\" returns successfully"
	May 31 18:02:27 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:27.321435963Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	May 31 18:02:27 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:27.333738268Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\""
	May 31 18:02:27 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:27.334215010Z" level=info msg="StartContainer for \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\""
	May 31 18:02:27 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:02:27.404911516Z" level=info msg="StartContainer for \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\" returns successfully"
	May 31 18:05:07 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:07.631085147Z" level=info msg="shim disconnected" id=fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5
	May 31 18:05:07 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:07.631181379Z" level=warning msg="cleaning up after shim disconnected" id=fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5 namespace=k8s.io
	May 31 18:05:07 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:07.631202230Z" level=info msg="cleaning up dead shim"
	May 31 18:05:07 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:07.640116873Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:05:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2525 runtime=io.containerd.runc.v2\n"
	May 31 18:05:08 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:08.196347094Z" level=info msg="RemoveContainer for \"cd37d7e6c7fad1ef674cc2879250f6da83924608590f4597ae0ed06d537d8e48\""
	May 31 18:05:08 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:08.200755201Z" level=info msg="RemoveContainer for \"cd37d7e6c7fad1ef674cc2879250f6da83924608590f4597ae0ed06d537d8e48\" returns successfully"
	May 31 18:05:32 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:32.320868293Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	May 31 18:05:32 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:32.333566580Z" level=info msg="CreateContainer within sandbox \"ff9f2b9e4b7106fd79a5145e2faa63f749ba617a2e3327e1b9b370b0a32c2fd7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028\""
	May 31 18:05:32 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:32.334054335Z" level=info msg="StartContainer for \"1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028\""
	May 31 18:05:32 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:05:32.405209371Z" level=info msg="StartContainer for \"1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028\" returns successfully"
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.645372277Z" level=info msg="shim disconnected" id=1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.645435297Z" level=warning msg="cleaning up after shim disconnected" id=1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 namespace=k8s.io
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.645450825Z" level=info msg="cleaning up dead shim"
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.647336744Z" level=error msg="collecting metrics for 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028" error="ttrpc: closed: unknown"
	May 31 18:08:12 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:12.657009517Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:08:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2622 runtime=io.containerd.runc.v2\n"
	May 31 18:08:13 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:13.502319440Z" level=info msg="RemoveContainer for \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\""
	May 31 18:08:13 embed-certs-20220531175604-6903 containerd[503]: time="2022-05-31T18:08:13.506968593Z" level=info msg="RemoveContainer for \"fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531175604-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531175604-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531175604-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_56_37_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:56:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531175604-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:08:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:07:04 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:07:04 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:07:04 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:07:04 +0000   Tue, 31 May 2022 17:56:31 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20220531175604-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                9377e8f5-ae2b-465c-b601-bd790903b8eb
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220531175604-6903                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-jrlsl                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20220531175604-6903              250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20220531175604-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nvktf                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20220531175604-6903              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x3 over 12m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b] <==
	* {"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T17:56:30.931Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:embed-certs-20220531175604-6903 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:56:31.822Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.823Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:56:31.824Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:56:31.824Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-31T18:06:31.835Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":550}
	{"level":"info","ts":"2022-05-31T18:06:31.836Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":550,"took":"576.229µs"}
	
	* 
	* ==> kernel <==
	*  18:08:55 up  1:51,  0 users,  load average: 1.01, 0.71, 1.13
	Linux embed-certs-20220531175604-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808] <==
	* I0531 17:56:33.502029       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:56:33.502098       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:56:33.502115       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:56:33.502122       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 17:56:33.502139       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:56:33.509858       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 17:56:34.382390       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:56:34.382418       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:56:34.386588       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:56:34.389618       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:56:34.389640       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:56:34.731605       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:56:34.758176       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:56:34.832687       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:56:34.837596       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0531 17:56:34.838344       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:56:34.841228       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:56:35.526888       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:56:36.183478       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:56:36.189560       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:56:36.198454       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:56:41.306447       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 17:56:49.027629       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 17:56:49.828447       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 17:56:50.264822       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad] <==
	* I0531 17:56:48.924684       1 shared_informer.go:247] Caches are synced for endpoint 
	I0531 17:56:48.924715       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 17:56:48.924965       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0531 17:56:48.926974       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 17:56:48.931979       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 17:56:48.976098       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0531 17:56:49.021334       1 shared_informer.go:247] Caches are synced for attach detach 
	I0531 17:56:49.033089       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jrlsl"
	I0531 17:56:49.034599       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nvktf"
	I0531 17:56:49.074109       1 shared_informer.go:247] Caches are synced for taint 
	I0531 17:56:49.074203       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 17:56:49.074226       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0531 17:56:49.074316       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220531175604-6903. Assuming now as a timestamp.
	I0531 17:56:49.074352       1 event.go:294] "Event occurred" object="embed-certs-20220531175604-6903" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220531175604-6903 event: Registered Node embed-certs-20220531175604-6903 in Controller"
	I0531 17:56:49.074384       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0531 17:56:49.133279       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:56:49.136437       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:56:49.552201       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:56:49.591219       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:56:49.591238       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:56:49.830463       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:56:49.852290       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:56:49.928800       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-z8m4h"
	I0531 17:56:49.932514       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-w2s2k"
	I0531 17:56:50.002323       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-z8m4h"
	
	* 
	* ==> kube-proxy [2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843] <==
	* I0531 17:56:50.241573       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 17:56:50.241684       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 17:56:50.241784       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:56:50.262124       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:56:50.262154       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:56:50.262162       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:56:50.262174       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:56:50.262538       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:56:50.263031       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:56:50.263090       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:56:50.263054       1 config.go:317] "Starting service config controller"
	I0531 17:56:50.263166       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:56:50.363889       1 shared_informer.go:247] Caches are synced for service config 
	I0531 17:56:50.363890       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e] <==
	* E0531 17:56:33.431718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:56:33.431722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:56:33.431754       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:56:33.431760       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:56:33.431778       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:33.431785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:33.502370       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:56:33.503086       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 17:56:33.503381       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:33.503415       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:33.503867       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:56:33.503957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 17:56:34.288126       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:56:34.288171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:56:34.331364       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:56:34.331388       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:56:34.365507       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:56:34.365532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:56:34.370569       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:56:34.370601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 17:56:34.434332       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:56:34.434358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 17:56:34.741075       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:56:34.741113       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 17:56:36.429128       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:56:18 UTC, end at Tue 2022-05-31 18:08:56 UTC. --
	May 31 18:07:36 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:36.626946    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:41 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:41.630089    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:46 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:46.631528    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:51 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:51.632736    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:07:56 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:07:56.633951    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:01 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:01.635594    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:06 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:06.636587    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:11 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:11.637447    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:13 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:13.501189    1306 scope.go:110] "RemoveContainer" containerID="fa154a39cdb76432675a6243f3cf268e08760c3ee50826c5f3f681d0615635a5"
	May 31 18:08:13 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:13.501522    1306 scope.go:110] "RemoveContainer" containerID="1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	May 31 18:08:13 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:13.501802    1306 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-jrlsl_kube-system(c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c)\"" pod="kube-system/kindnet-jrlsl" podUID=c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c
	May 31 18:08:16 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:16.638219    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:21 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:21.638993    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:26 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:26.639944    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:27 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:27.319469    1306 scope.go:110] "RemoveContainer" containerID="1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	May 31 18:08:27 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:27.319777    1306 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-jrlsl_kube-system(c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c)\"" pod="kube-system/kindnet-jrlsl" podUID=c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c
	May 31 18:08:31 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:31.641438    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:36 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:36.642589    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:38 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:38.319271    1306 scope.go:110] "RemoveContainer" containerID="1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	May 31 18:08:38 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:38.319524    1306 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-jrlsl_kube-system(c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c)\"" pod="kube-system/kindnet-jrlsl" podUID=c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c
	May 31 18:08:41 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:41.644208    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:46 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:46.645364    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:08:51 embed-certs-20220531175604-6903 kubelet[1306]: I0531 18:08:51.319642    1306 scope.go:110] "RemoveContainer" containerID="1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	May 31 18:08:51 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:51.319998    1306 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-jrlsl_kube-system(c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c)\"" pod="kube-system/kindnet-jrlsl" podUID=c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c
	May 31 18:08:51 embed-certs-20220531175604-6903 kubelet[1306]: E0531 18:08:51.646167    1306 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
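Note: the kubelet log above shows the node stuck behind "cni plugin not initialized": the kindnet-cni container in pod kindnet-jrlsl is in CrashLoopBackOff, so the node never sheds its not-ready taint and nothing else schedules. The earlier kube-scheduler "forbidden" errors, by contrast, are the usual startup window before RBAC is in place; the final "Caches are synced" line at 17:56:36 shows the scheduler recovered. To pull the crash output for the CNI pod (a sketch using the names taken from the log; -n and --previous are standard kubectl flags):

	kubectl --context embed-certs-20220531175604-6903 -n kube-system get pod kindnet-jrlsl
	kubectl --context embed-certs-20220531175604-6903 -n kube-system logs kindnet-jrlsl --previous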
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-w2s2k storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 describe pod busybox coredns-64897985d-w2s2k storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531175604-6903 describe pod busybox coredns-64897985d-w2s2k storage-provisioner: exit status 1 (56.219187ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hphtm (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hphtm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  49s (x8 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-w2s2k" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531175604-6903 describe pod busybox coredns-64897985d-w2s2k storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (484.32s)
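Note: the exit status 1 from kubectl describe above is not a busybox lookup failure. The post-mortem names three pods without a namespace, so the lookup runs against default; coredns-64897985d-w2s2k and storage-provisioner live in kube-system, hence the two NotFound errors while the busybox description still prints. A sketch of the same check with namespaces made explicit:

	kubectl --context embed-certs-20220531175604-6903 describe pod busybox
	kubectl --context embed-certs-20220531175604-6903 -n kube-system describe pod coredns-64897985d-w2s2k storage-provisioner

Either way the busybox events already show the root cause: FailedScheduling against the single node's node.kubernetes.io/not-ready taint, which traces back to the crashing kindnet CNI container above.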

TestStartStop/group/newest-cni/serial/Pause (6.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220531175602-6903 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20220531175602-6903 --alsologtostderr -v=1: exit status 80 (2.241664421s)

-- stdout --
	* Pausing node newest-cni-20220531175602-6903 ... 
	
	

-- /stdout --
** stderr ** 
	I0531 18:01:06.352808  256890 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:01:06.352968  256890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:01:06.352978  256890 out.go:309] Setting ErrFile to fd 2...
	I0531 18:01:06.352982  256890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:01:06.353077  256890 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:01:06.353215  256890 out.go:303] Setting JSON to false
	I0531 18:01:06.353235  256890 mustload.go:65] Loading cluster: newest-cni-20220531175602-6903
	I0531 18:01:06.353553  256890 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:01:06.353906  256890 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:06.386372  256890 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:06.386626  256890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:01:06.484147  256890 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:54 SystemTime:2022-05-31 18:01:06.415599954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:01:06.484574  256890 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0531 18:01:06.486828  256890 out.go:177] * Pausing node newest-cni-20220531175602-6903 ... 
	I0531 18:01:06.488143  256890 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:06.488358  256890 ssh_runner.go:195] Run: systemctl --version
	I0531 18:01:06.488391  256890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:06.519483  256890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:06.598874  256890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:01:06.608175  256890 pause.go:50] kubelet running: true
	I0531 18:01:06.608222  256890 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 18:01:06.716110  256890 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0531 18:01:06.992537  256890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:01:07.002263  256890 pause.go:50] kubelet running: true
	I0531 18:01:07.002321  256890 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 18:01:07.103719  256890 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0531 18:01:07.644421  256890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:01:07.654007  256890 pause.go:50] kubelet running: true
	I0531 18:01:07.654088  256890 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 18:01:07.759341  256890 retry.go:31] will retry after 655.06503ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0531 18:01:08.414864  256890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:01:08.424510  256890 pause.go:50] kubelet running: true
	I0531 18:01:08.424572  256890 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 18:01:08.529300  256890 out.go:177] 
	W0531 18:01:08.530755  256890 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0531 18:01:08.530775  256890 out.go:239] * 
	* 
	W0531 18:01:08.532808  256890 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:01:08.534155  256890 out.go:177] 

** /stderr **
start_stop_delete_test.go:313: out/minikube-linux-amd64 pause -p newest-cni-20220531175602-6903 --alsologtostderr -v=1 failed: exit status 80
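Note: pause never reaches the container runtime here. Every retry stops at sudo systemctl disable --now kubelet: the image ships a SysV init script for kubelet (see "Synchronizing state of kubelet.service with SysV service script" in the stderr above), systemd hands the disable to /lib/systemd/systemd-sysv-install, and update-rc.d aborts because that script's LSB header declares no Default-Start runlevels. The failing step can be reproduced by hand inside the node (a sketch; the profile name is taken from this run):

	out/minikube-linux-amd64 -p newest-cni-20220531175602-6903 ssh -- sudo systemctl disable --now kubelet

By contrast, sudo systemctl stop kubelet does not go through update-rc.d, which is one way to separate the SysV-compat failure from the kubelet shutdown itself.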
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220531175602-6903
helpers_test.go:235: (dbg) docker inspect newest-cni-20220531175602-6903:

-- stdout --
	[
	    {
	        "Id": "de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71",
	        "Created": "2022-05-31T17:59:33.649637794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253885,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:00:32.570188094Z",
	            "FinishedAt": "2022-05-31T18:00:31.334161439Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71/hostname",
	        "HostsPath": "/var/lib/docker/containers/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71/hosts",
	        "LogPath": "/var/lib/docker/containers/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71-json.log",
	        "Name": "/newest-cni-20220531175602-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220531175602-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220531175602-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9f5edb84c83211fb13309e40f012e6d551140253c93c4590c4a1f80563f5c1ac-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/5442941f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/docker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f5edb84c83211fb13309e40f012e6d551140253c93c4590c4a1f80563f5c1ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f5edb84c83211fb13309e40f012e6d551140253c93c4590c4a1f80563f5c1ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f5edb84c83211fb13309e40f012e6d551140253c93c4590c4a1f80563f5c1ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220531175602-6903",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220531175602-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220531175602-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220531175602-6903",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220531175602-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72d03a9307c3cc13ada83f1b0caab90d8bcec4f331c358a3e12a4a2308c24a6a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/72d03a9307c3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220531175602-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de51a3963e61",
	                        "newest-cni-20220531175602-6903"
	                    ],
	                    "NetworkID": "8293cc9ba146f6498f2356f2bf1d8638ecf22835b98f3215a084a1bee9850a46",
	                    "EndpointID": "cd5311c17f42b411903f14e0feb1b3d1001a1339a5f91c6d4358fd443f0907f8",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
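Note: the inspect output confirms the container itself was never paused ("Status": "running", "Paused": false); the exit-80 above came from the kubelet-disable step, before minikube asked the runtime to pause anything. The same check as a one-liner (a sketch using the container name from this report):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' newest-cni-20220531175602-6903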
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220531175602-6903 -n newest-cni-20220531175602-6903
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20220531175602-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	|         | pgrep -a kubelet                                           |                                                |         |                |                     |                     |
	| start   | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:49 UTC |
	|         | --memory=2048                                              |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                          |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                               |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	| ssh     | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | pgrep -a kubelet                                           |                                                |         |                |                     |                     |
	| logs    | calico-20220531174030-6903                                 | calico-20220531174030-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220531174030-6903                              | calico-20220531174030-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20220531175323-6903      | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | disable-driver-mounts-20220531175323-6903                  |                                                |         |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:54 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |         |                |                     |                     |
	|         | --keep-context=false                                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903                     | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                                 | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:00:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:00:31.855034  253603 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:00:31.855128  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855137  253603 out.go:309] Setting ErrFile to fd 2...
	I0531 18:00:31.855169  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855275  253603 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:00:31.855500  253603 out.go:303] Setting JSON to false
	I0531 18:00:31.857002  253603 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6183,"bootTime":1654013849,"procs":755,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:00:31.857065  253603 start.go:125] virtualization: kvm guest
	I0531 18:00:31.859650  253603 out.go:177] * [newest-cni-20220531175602-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:00:31.861106  253603 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:00:31.861145  253603 notify.go:193] Checking for updates...
	I0531 18:00:31.863620  253603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:00:31.865010  253603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:31.866391  253603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:00:31.867875  253603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:00:31.871501  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:31.872091  253603 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:00:31.913476  253603 docker.go:137] docker version: linux-20.10.16
	I0531 18:00:31.913607  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.012796  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:31.941581138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.012892  253603 docker.go:254] overlay module found
	I0531 18:00:32.015694  253603 out.go:177] * Using the docker driver based on existing profile
	I0531 18:00:32.016948  253603 start.go:284] selected driver: docker
	I0531 18:00:32.016961  253603 start.go:806] validating driver "docker" against &{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAd
donRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.017071  253603 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:00:32.017980  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.118816  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:32.047560918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.119131  253603 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0531 18:00:32.119167  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:32.119175  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:32.119195  253603 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119208  253603 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119215  253603 start_flags.go:306] config:
	{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true
apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.122424  253603 out.go:177] * Starting control plane node newest-cni-20220531175602-6903 in cluster newest-cni-20220531175602-6903
	I0531 18:00:32.123755  253603 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:00:32.125291  253603 out.go:177] * Pulling base image ...
	I0531 18:00:32.126765  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:32.126808  253603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:00:32.126822  253603 cache.go:57] Caching tarball of preloaded images
	I0531 18:00:32.126856  253603 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:00:32.127020  253603 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:00:32.127034  253603 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:00:32.127170  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.176155  253603 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:00:32.176180  253603 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:00:32.176199  253603 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:00:32.176233  253603 start.go:352] acquiring machines lock for newest-cni-20220531175602-6903: {Name:mk17d90b6d3b0fde22fd963b3786e868dc154060 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:00:32.176322  253603 start.go:356] acquired machines lock for "newest-cni-20220531175602-6903" in 69.182µs
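The pair of lines above shows minikube serializing machine operations behind a profile-scoped lock with the spec Delay:500ms Timeout:10m0s. A minimal sketch of acquiring a lock with those semantics, assuming a hypothetical tryAcquire primitive (this is illustrative, not minikube's actual locking code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// acquire polls tryAcquire every delay until it succeeds or timeout
// elapses, mirroring the Delay:500ms Timeout:10m0s spec logged above.
func acquire(tryAcquire func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryAcquire() {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	sem := make(chan struct{}, 1) // one holder at a time
	try := func() bool {
		select {
		case sem <- struct{}{}:
			return true
		default:
			return false
		}
	}
	if err := acquire(try, 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("lock held")
	<-sem // release
}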
	I0531 18:00:32.176340  253603 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:00:32.176344  253603 fix.go:55] fixHost starting: 
	I0531 18:00:32.176560  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.209761  253603 fix.go:103] recreateIfNeeded on newest-cni-20220531175602-6903: state=Stopped err=<nil>
	W0531 18:00:32.209791  253603 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:00:32.212875  253603 out.go:177] * Restarting existing docker container for "newest-cni-20220531175602-6903" ...
	I0531 18:00:30.443775  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.444063  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.214225  253603 cli_runner.go:164] Run: docker start newest-cni-20220531175602-6903
	I0531 18:00:32.577327  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.610657  253603 kic.go:416] container "newest-cni-20220531175602-6903" state is running.
	I0531 18:00:32.611011  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:32.643675  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.643905  253603 machine.go:88] provisioning docker machine ...
	I0531 18:00:32.643932  253603 ubuntu.go:169] provisioning hostname "newest-cni-20220531175602-6903"
	I0531 18:00:32.643983  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:32.674555  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:32.674809  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:32.674837  253603 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531175602-6903 && echo "newest-cni-20220531175602-6903" | sudo tee /etc/hostname
	I0531 18:00:32.675642  253603 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46432->127.0.0.1:49427: read: connection reset by peer
	I0531 18:00:35.795562  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531175602-6903
	
	I0531 18:00:35.795625  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:35.826982  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:35.827166  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:35.827189  253603 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531175602-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531175602-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531175602-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:00:35.938582  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: 
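The SSH exchanges above provision the restarted container over its forwarded port (127.0.0.1:49427) as the docker user. A sketch of that run-one-command-over-SSH pattern using golang.org/x/crypto/ssh; the address, key path, and helper name are illustrative, and minikube's real client sits behind its sshutil/libmachine wrappers:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the forwarded docker port and runs one command,
// roughly what the "About to run SSH command" log lines correspond to.
func runOverSSH(addr string, key []byte, cmd string) (string, error) {
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	key, _ := os.ReadFile("id_rsa") // path is illustrative
	out, err := runOverSSH("127.0.0.1:49427", key, "hostname")
	fmt.Println(out, err)
}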
	I0531 18:00:35.938614  253603 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:00:35.938689  253603 ubuntu.go:177] setting up certificates
	I0531 18:00:35.938700  253603 provision.go:83] configureAuth start
	I0531 18:00:35.938739  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:35.970778  253603 provision.go:138] copyHostCerts
	I0531 18:00:35.970836  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:00:35.970855  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:00:35.970915  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:00:35.971070  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:00:35.971088  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:00:35.971129  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:00:35.971236  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:00:35.971254  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:00:35.971287  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:00:35.971355  253603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531175602-6903 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531175602-6903]
	I0531 18:00:36.142238  253603 provision.go:172] copyRemoteCerts
	I0531 18:00:36.142291  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:00:36.142320  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.173472  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.254066  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:00:36.271055  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:00:36.287105  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:00:36.302927  253603 provision.go:86] duration metric: configureAuth took 364.217481ms
	I0531 18:00:36.302948  253603 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:00:36.303122  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:36.303134  253603 machine.go:91] provisioned docker machine in 3.659215237s
	I0531 18:00:36.303168  253603 start.go:306] post-start starting for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 18:00:36.303175  253603 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:00:36.303216  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:00:36.303261  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.335634  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.418002  253603 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:00:36.420669  253603 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:00:36.420693  253603 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:00:36.420701  253603 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:00:36.420706  253603 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:00:36.420719  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:00:36.420765  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:00:36.420825  253603 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:00:36.420897  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:00:36.427208  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:36.443819  253603 start.go:309] post-start completed in 140.639246ms
	I0531 18:00:36.443888  253603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:00:36.443930  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.477971  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.555314  253603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:00:36.559129  253603 fix.go:57] fixHost completed within 4.38277864s
	I0531 18:00:36.559171  253603 start.go:81] releasing machines lock for "newest-cni-20220531175602-6903", held for 4.382836668s
	I0531 18:00:36.559246  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:36.590986  253603 ssh_runner.go:195] Run: systemctl --version
	I0531 18:00:36.591023  253603 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:00:36.591084  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.591027  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.624550  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.625023  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.722476  253603 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:00:36.732794  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:00:36.741236  253603 docker.go:187] disabling docker service ...
	I0531 18:00:36.741281  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:00:36.757377  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:00:36.765762  253603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:00:36.850081  253603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:00:34.943765  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.944411  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:39.443721  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.930380  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:00:36.938984  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:00:36.951805  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
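The payload piped through base64 -d | sudo tee above is containerd's config.toml (it begins with version = 2 and, among other settings, points the CNI conf_dir at /etc/cni/net.mk). A small helper for decoding such a payload offline to inspect it; this tool is a convenience sketch, not part of minikube:

package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

// Decode a base64 payload passed as the first argument so the embedded
// config.toml can be read without running it through sudo tee.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: decode <base64-string>")
		os.Exit(1)
	}
	raw, err := base64.StdEncoding.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	os.Stdout.Write(raw)
}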
	I0531 18:00:36.964223  253603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:00:36.970217  253603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:00:36.976123  253603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:00:37.050759  253603 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:00:37.133255  253603 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:00:37.133326  253603 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:00:37.136650  253603 start.go:468] Will wait 60s for crictl version
	I0531 18:00:37.136705  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:37.162540  253603 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:00:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:00:41.943597  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:43.944098  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.209660  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:48.232631  253603 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:00:48.232687  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.260476  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.288516  253603 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:00:48.289983  253603 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:00:48.321110  253603 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 18:00:48.324362  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
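The one-liner above rewrites /etc/hosts atomically: filter out any stale host.minikube.internal line, append the fresh gateway mapping, and copy the temp file over the original. The same filter-and-append step expressed as a standalone Go sketch (the bash shown in the log is what actually runs):

package main

import (
	"fmt"
	"os"
	"strings"
)

// rewriteHosts drops any line ending in "\thost.minikube.internal" and
// appends the new mapping, mirroring the grep -v / echo pipeline above.
func rewriteHosts(contents, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Print the rewritten file rather than writing it back; the log's
	// version does the final copy with sudo cp.
	fmt.Print(rewriteHosts(string(data), "192.168.58.1"))
}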
	I0531 18:00:48.335260  253603 out.go:177]   - kubelet.network-plugin=cni
	I0531 18:00:48.336944  253603 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 18:00:48.338457  253603 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:00:46.442937  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.443904  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.444077  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.446109  243743 node_ready.go:38] duration metric: took 4m0.008452547s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:00:50.448431  243743 out.go:177] 
	W0531 18:00:50.449997  243743 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:00:50.450021  243743 out.go:239] * 
	W0531 18:00:50.450791  243743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:00:50.452520  243743 out.go:177] 
	I0531 18:00:48.339824  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:48.339884  253603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:00:48.363681  253603 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:00:48.363700  253603 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:00:48.363745  253603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:00:48.385839  253603 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:00:48.385856  253603 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:00:48.385893  253603 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:00:48.408057  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:48.408077  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:48.408091  253603 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0531 18:00:48.408103  253603 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220531175602-6903 NodeName:newest-cni-20220531175602-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect
:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:00:48.408230  253603 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220531175602-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
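The YAML above is generated from the kubeadm options logged at kubeadm.go:158; for example, the extra-config pair kubeadm.pod-network-cidr=192.168.111.111/16 surfaces as podSubnet. A toy text/template rendering of just the networking stanza to show that mapping; minikube's real templates live in its bootstrapper package and are more involved:

package main

import (
	"os"
	"text/template"
)

// Illustrative only: map a couple of the logged option fields into the
// networking stanza seen in the generated kubeadm config above.
type opts struct {
	PodSubnet   string
	ServiceCIDR string
	DNSDomain   string
}

const stanza = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("networking").Parse(stanza))
	_ = t.Execute(os.Stdout, opts{
		PodSubnet:   "192.168.111.111/16",
		ServiceCIDR: "10.96.0.0/12",
		DNSDomain:   "cluster.local",
	})
}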
	I0531 18:00:48.408307  253603 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220531175602-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:00:48.408350  253603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:00:48.414874  253603 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:00:48.414928  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:00:48.421138  253603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes)
	I0531 18:00:48.433792  253603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:00:48.447663  253603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2195 bytes)
	I0531 18:00:48.459853  253603 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:00:48.462496  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:00:48.470850  253603 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903 for IP: 192.168.58.2
	I0531 18:00:48.470935  253603 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:00:48.470970  253603 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:00:48.471030  253603 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.key
	I0531 18:00:48.471080  253603 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key.cee25041
	I0531 18:00:48.471114  253603 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key
	I0531 18:00:48.471247  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:00:48.471280  253603 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:00:48.471292  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:00:48.471322  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:00:48.471348  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:00:48.471369  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:00:48.471406  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:48.471990  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:00:48.487996  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:00:48.504050  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:00:48.520129  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:00:48.536197  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:00:48.551773  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:00:48.567698  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:00:48.583534  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:00:48.599284  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:00:48.615488  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:00:48.631736  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:00:48.648044  253603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:00:48.659819  253603 ssh_runner.go:195] Run: openssl version
	I0531 18:00:48.664514  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:00:48.671684  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.674554  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.674592  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.678953  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:00:48.685183  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:00:48.691850  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.694734  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.694775  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.699108  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:00:48.705843  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:00:48.713797  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.716588  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.716628  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.720988  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
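
Note: the hash-named links above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: each CA in /etc/ssl/certs is symlinked under its subject hash plus a ".0" suffix so the verifier can locate it by hash lookup. A minimal sketch of one such step, using the same paths and commands as the log:

    # derive the subject hash and install the lookup symlink (sketch)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
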
	I0531 18:00:48.727223  253603 kubeadm.go:395] StartCluster: {Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:48.727350  253603 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:00:48.727391  253603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:00:48.751975  253603 cri.go:87] found id: "776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af"
	I0531 18:00:48.751998  253603 cri.go:87] found id: "e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0"
	I0531 18:00:48.752009  253603 cri.go:87] found id: "8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620"
	I0531 18:00:48.752025  253603 cri.go:87] found id: "2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914"
	I0531 18:00:48.752038  253603 cri.go:87] found id: "f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9"
	I0531 18:00:48.752051  253603 cri.go:87] found id: "6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b"
	I0531 18:00:48.752060  253603 cri.go:87] found id: ""
	I0531 18:00:48.752094  253603 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:00:48.763086  253603 cri.go:114] JSON = null
	W0531 18:00:48.763128  253603 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
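
Note: the warning above is a disagreement between two views of the runtime: crictl reported six kube-system containers while `runc list` under the same state root returned null, so the unpause check could not reconcile them and minikube simply continued. The two views can be compared directly with the commands already shown in the log:

    # CRI view vs. runc view of the kube-system containers (sketch)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc --root /run/containerd/runc/k8s.io list -f json
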
	I0531 18:00:48.763217  253603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:00:48.769482  253603 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:00:48.769502  253603 kubeadm.go:626] restartCluster start
	I0531 18:00:48.769537  253603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:00:48.775590  253603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:48.776475  253603 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220531175602-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:48.777108  253603 kubeconfig.go:127] "newest-cni-20220531175602-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:00:48.777968  253603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:00:48.779498  253603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:00:48.785488  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:48.785519  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:48.793052  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:48.993429  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:48.993482  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.001612  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.193914  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.193974  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.202307  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.393581  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.393647  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.401876  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.594165  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.594228  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.602448  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.793873  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.793934  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.802272  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.993549  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.993606  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.002105  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.193422  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.193478  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.201805  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.394099  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.394197  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.402406  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.593662  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.593737  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.602754  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.794037  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.794083  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.803034  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.993253  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.993322  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.002295  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.193608  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.193667  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.201663  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.393968  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.394033  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.402169  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.593519  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.593576  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.602288  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.793534  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.793598  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.803943  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.803964  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.803995  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.812522  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.812554  253603 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:00:51.812560  253603 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:00:51.812574  253603 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:00:51.812615  253603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:00:51.839954  253603 cri.go:87] found id: "776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af"
	I0531 18:00:51.839976  253603 cri.go:87] found id: "e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0"
	I0531 18:00:51.839982  253603 cri.go:87] found id: "8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620"
	I0531 18:00:51.839989  253603 cri.go:87] found id: "2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914"
	I0531 18:00:51.839994  253603 cri.go:87] found id: "f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9"
	I0531 18:00:51.840001  253603 cri.go:87] found id: "6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b"
	I0531 18:00:51.840013  253603 cri.go:87] found id: ""
	I0531 18:00:51.840018  253603 cri.go:232] Stopping containers: [776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0 8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620 2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914 f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9 6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b]
	I0531 18:00:51.840059  253603 ssh_runner.go:195] Run: which crictl
	I0531 18:00:51.842973  253603 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0 8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620 2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914 f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9 6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b
	I0531 18:00:51.869603  253603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:00:51.880644  253603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:00:51.887664  253603 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 17:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 17:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:59 /etc/kubernetes/scheduler.conf
	
	I0531 18:00:51.887720  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:00:51.894538  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:00:51.901534  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:00:51.908371  253603 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.908424  253603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:00:51.917592  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:00:51.925101  253603 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.925151  253603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:00:51.931258  253603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:00:51.937908  253603 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:00:51.937925  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:51.981409  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.730818  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.866579  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.918070  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
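
Note: restartCluster replays `kubeadm init` phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of running a full init, which preserves the existing cluster state. A condensed sketch of the same sequence, assuming the binary and config paths shown in the log:

    # replay the individual init phases against the existing config (sketch)
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # $phase is word-split on purpose
    done
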
	I0531 18:00:52.960507  253603 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:00:52.960554  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:53.469301  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:53.969201  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:54.469096  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:54.968777  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:55.468873  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:55.968873  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:56.468973  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:56.969026  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:57.468917  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:57.968887  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:58.469411  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:58.969742  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:59.011037  253603 api_server.go:71] duration metric: took 6.050532367s to wait for apiserver process to appear ...
	I0531 18:00:59.011067  253603 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:00:59.011079  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:00:59.011494  253603 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0531 18:00:59.512207  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:02.105106  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:01:02.105133  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
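
Note: the 403 right after the restart is expected: the healthz probe carries no client credentials, so the apiserver evaluates it as system:anonymous, and anonymous access to /healthz is only opened once the rbac/bootstrap-roles post-start hook (still failing in the 500 bodies below) has recreated the default bindings such as system:public-info-viewer. The same probe by hand, endpoint taken from the log:

    # unauthenticated healthz probe; 403 until bootstrap RBAC exists (sketch)
    curl -ks https://192.168.58.2:8443/healthz
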
	I0531 18:01:02.512478  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:02.516889  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:01:02.516910  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:01:03.012313  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:03.016705  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:01:03.016731  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:01:03.512288  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:03.516555  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 18:01:03.522009  253603 api_server.go:140] control plane version: v1.23.6
	I0531 18:01:03.522027  253603 api_server.go:130] duration metric: took 4.510954896s to wait for apiserver health ...
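
Note: the health wait polls /healthz roughly every 500ms until it returns 200 with body "ok"; here that took about 4.5s across the 403 and 500 responses above. An equivalent shell loop (a sketch, not minikube code):

    # poll until the apiserver reports healthy (sketch)
    until [ "$(curl -ks https://192.168.58.2:8443/healthz)" = "ok" ]; do sleep 0.5; done
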
	I0531 18:01:03.522036  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:01:03.522043  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:01:03.524134  253603 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:01:03.525439  253603 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:01:03.529095  253603 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:01:03.529112  253603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:01:03.541449  253603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
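
Note: because the docker driver is paired with the containerd runtime, minikube recommended kindnet (cni.go:162 above) and applied its manifest with the cluster's own kubectl. To verify by hand (the app=kindnet label is an assumption taken from the kindnet manifest):

    # check the CNI plugin binaries and the kindnet DaemonSet pods (sketch)
    stat /opt/cni/bin/portmap
    sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get pods -n kube-system -l app=kindnet
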
	I0531 18:01:04.388379  253603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:01:04.394833  253603 system_pods.go:59] 9 kube-system pods found
	I0531 18:01:04.394868  253603 system_pods.go:61] "coredns-64897985d-lv5sq" [203ef58b-2708-4651-ab86-861f5fa69372] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394878  253603 system_pods.go:61] "etcd-newest-cni-20220531175602-6903" [5f7cb658-1ceb-4c47-8bee-effc9614aa23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:01:04.394887  253603 system_pods.go:61] "kindnet-dfhrt" [d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:01:04.394895  253603 system_pods.go:61] "kube-apiserver-newest-cni-20220531175602-6903" [d67d50e2-7deb-4c61-81a8-36012352da0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:01:04.394908  253603 system_pods.go:61] "kube-controller-manager-newest-cni-20220531175602-6903" [d1b6d2af-6fb6-4c89-bfbf-d40d8c24f4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:01:04.394914  253603 system_pods.go:61] "kube-proxy-44xvh" [aa6aeb24-8d4b-4960-99a0-65d0493743bf] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:01:04.394927  253603 system_pods.go:61] "kube-scheduler-newest-cni-20220531175602-6903" [5f2f1f7a-3268-4649-9525-4849378243c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:01:04.394933  253603 system_pods.go:61] "metrics-server-b955d9d8-64wmz" [9995c513-1795-474c-833a-e60e61dc0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394938  253603 system_pods.go:61] "storage-provisioner" [368826ec-ecba-4116-971f-0e75506e77bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394945  253603 system_pods.go:74] duration metric: took 6.541942ms to wait for pod list to return data ...
	I0531 18:01:04.394952  253603 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:01:04.397297  253603 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:01:04.397318  253603 node_conditions.go:123] node cpu capacity is 8
	I0531 18:01:04.397328  253603 node_conditions.go:105] duration metric: took 2.369222ms to run NodePressure ...
	I0531 18:01:04.397343  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:01:04.522242  253603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:01:04.528860  253603 ops.go:34] apiserver oom_adj: -16
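
Note: oom_adj -16 confirms the apiserver is shielded from the kernel OOM killer; -16 on the legacy -17..15 scale corresponds to the strongly negative oom_score_adj the kubelet assigns to control-plane static pods. Both values can be read from procfs:

    # OOM protection of the apiserver process, legacy and modern scales (sketch)
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj
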
	I0531 18:01:04.528888  253603 kubeadm.go:630] restartCluster took 15.759378612s
	I0531 18:01:04.528897  253603 kubeadm.go:397] StartCluster complete in 15.801681788s
	I0531 18:01:04.528917  253603 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:01:04.529033  253603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:01:04.530679  253603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:01:04.533767  253603 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531175602-6903" rescaled to 1
	I0531 18:01:04.533818  253603 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:01:04.533838  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:01:04.536326  253603 out.go:177] * Verifying Kubernetes components...
	I0531 18:01:04.533856  253603 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 18:01:04.534015  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:01:04.537649  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:01:04.537683  253603 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537700  253603 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537713  253603 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531175602-6903"
	I0531 18:01:04.537713  253603 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537715  253603 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531175602-6903"
	W0531 18:01:04.537721  253603 addons.go:165] addon storage-provisioner should already be in state true
	W0531 18:01:04.537727  253603 addons.go:165] addon metrics-server should already be in state true
	I0531 18:01:04.537767  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.537777  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.537687  253603 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537727  253603 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531175602-6903"
	I0531 18:01:04.537814  253603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531175602-6903"
	W0531 18:01:04.537839  253603 addons.go:165] addon dashboard should already be in state true
	I0531 18:01:04.537886  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.538099  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538258  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538288  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538354  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.582251  253603 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:01:04.583780  253603 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:01:04.585078  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:01:04.585101  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:01:04.585148  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.586519  253603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:01:04.588458  253603 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:01:04.589819  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:01:04.589835  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:01:04.589870  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.588540  253603 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:01:04.589914  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:01:04.589608  253603 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531175602-6903"
	W0531 18:01:04.589994  253603 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:01:04.590025  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.590456  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.589970  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.622440  253603 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:01:04.622511  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:01:04.622642  253603 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 18:01:04.633508  253603 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:01:04.633529  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:01:04.633581  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.633878  253603 api_server.go:71] duration metric: took 100.025723ms to wait for apiserver process to appear ...
	I0531 18:01:04.633902  253603 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:01:04.633915  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:04.636308  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.639626  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 18:01:04.640522  253603 api_server.go:140] control plane version: v1.23.6
	I0531 18:01:04.640542  253603 api_server.go:130] duration metric: took 6.632874ms to wait for apiserver health ...
	I0531 18:01:04.640552  253603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:01:04.641487  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.650123  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.651235  253603 system_pods.go:59] 9 kube-system pods found
	I0531 18:01:04.651429  253603 system_pods.go:61] "coredns-64897985d-lv5sq" [203ef58b-2708-4651-ab86-861f5fa69372] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651499  253603 system_pods.go:61] "etcd-newest-cni-20220531175602-6903" [5f7cb658-1ceb-4c47-8bee-effc9614aa23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:01:04.651514  253603 system_pods.go:61] "kindnet-dfhrt" [d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:01:04.651525  253603 system_pods.go:61] "kube-apiserver-newest-cni-20220531175602-6903" [d67d50e2-7deb-4c61-81a8-36012352da0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:01:04.651537  253603 system_pods.go:61] "kube-controller-manager-newest-cni-20220531175602-6903" [d1b6d2af-6fb6-4c89-bfbf-d40d8c24f4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:01:04.651547  253603 system_pods.go:61] "kube-proxy-44xvh" [aa6aeb24-8d4b-4960-99a0-65d0493743bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:01:04.651557  253603 system_pods.go:61] "kube-scheduler-newest-cni-20220531175602-6903" [5f2f1f7a-3268-4649-9525-4849378243c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:01:04.651565  253603 system_pods.go:61] "metrics-server-b955d9d8-64wmz" [9995c513-1795-474c-833a-e60e61dc0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651574  253603 system_pods.go:61] "storage-provisioner" [368826ec-ecba-4116-971f-0e75506e77bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651580  253603 system_pods.go:74] duration metric: took 11.022992ms to wait for pod list to return data ...
	I0531 18:01:04.651588  253603 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:01:04.653854  253603 default_sa.go:45] found service account: "default"
	I0531 18:01:04.653878  253603 default_sa.go:55] duration metric: took 2.284188ms for default service account to be created ...
	I0531 18:01:04.653893  253603 kubeadm.go:572] duration metric: took 120.041989ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 18:01:04.653922  253603 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:01:04.656488  253603 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:01:04.656514  253603 node_conditions.go:123] node cpu capacity is 8
	I0531 18:01:04.656527  253603 node_conditions.go:105] duration metric: took 2.599307ms to run NodePressure ...
	I0531 18:01:04.656538  253603 start.go:213] waiting for startup goroutines ...
	I0531 18:01:04.673010  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.728342  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:01:04.728368  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:01:04.736428  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:01:04.736451  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:01:04.742828  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:01:04.742852  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:01:04.746024  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:01:04.750055  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:01:04.750076  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:01:04.758284  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:01:04.758304  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:01:04.801922  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:01:04.801947  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:01:04.802275  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:01:04.807930  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:01:04.820976  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:01:04.821004  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:01:04.911836  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:01:04.911866  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:01:04.931751  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:01:04.931779  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:01:05.022410  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:01:05.022437  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:01:05.105670  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:01:05.105701  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:01:05.123433  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:01:05.123460  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:01:05.202647  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:01:05.305415  253603 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531175602-6903"
	I0531 18:01:05.471026  253603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:01:05.472226  253603 addons.go:417] enableAddons completed in 938.375737ms
	I0531 18:01:05.510490  253603 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0531 18:01:05.512509  253603 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531175602-6903" cluster and "default" namespace by default
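
Note: the start finished with a kubectl/cluster minor-version skew of 1 (1.24.1 vs 1.23.6), which is within kubectl's supported window, and the kubeconfig context was switched to the new profile. A quick post-start check from the host (illustrative):

    # confirm the active context and cluster reachability (sketch)
    kubectl config current-context
    kubectl get pods -A
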
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	8a8b463da962d       4c03754524064       5 seconds ago        Running             kube-proxy                1                   1adfca4938f11
	4146cba57a03a       6de166512aa22       5 seconds ago        Running             kindnet-cni               1                   f47130c824303
	096e907429e1c       25f8c7f3da61c       10 seconds ago       Running             etcd                      1                   efcea4a25d2db
	c7007061d9990       595f327f224a4       10 seconds ago       Running             kube-scheduler            1                   46eeb0ab4af95
	c585073fed2ff       8fa62c12256df       10 seconds ago       Running             kube-apiserver            1                   fad994a20d15c
	a75ee54116b38       df7b72818ad2e       10 seconds ago       Running             kube-controller-manager   1                   edb465ab109ce
	776259150b44a       6de166512aa22       58 seconds ago       Exited              kindnet-cni               0                   22feb6e4c9d92
	e8f79a0b14e7b       4c03754524064       58 seconds ago       Exited              kube-proxy                0                   05d422ce6353d
	8eda42b95092e       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   2a956c579ce23
	2dfc683928eed       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   afdb62e6e1a44
	f3a9c3a521d42       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   a2c7f4ff33605
	6720e78c02157       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   9a5652b80be82
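
Note: the table is consistent with the restart path above: each attempt-0 container is Exited and has been replaced by a Running attempt-1 counterpart in a fresh sandbox. The same view can be reproduced inside the node:

    # container status as collected for this report (sketch)
    minikube ssh -p newest-cni-20220531175602-6903 -- sudo crictl ps -a
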
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 18:00:32 UTC, end at Tue 2022-05-31 18:01:09 UTC. --
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.579355662Z" level=info msg="StopPodSandbox for \"22feb6e4c9d926e1f16066d611274efc2b8fe4c8f80c4c19aa549218f818a02b\" returns successfully"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.580030347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-dfhrt,Uid:d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07,Namespace:kube-system,Attempt:1,}"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.596936102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.597016853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.597030478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.597319541Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f47130c8243033754bf289854cfdaaabfb00d433590571253f9250a6e5b0a67d pid=1399 runtime=io.containerd.runc.v2
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.877635106Z" level=info msg="StopPodSandbox for \"05d422ce6353d7662e03640c5d5420ecbeaa5e19c88644e89090ef4a361695c3\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.877733761Z" level=info msg="Container to stop \"e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.877838186Z" level=info msg="TearDown network for sandbox \"05d422ce6353d7662e03640c5d5420ecbeaa5e19c88644e89090ef4a361695c3\" successfully"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.877858644Z" level=info msg="StopPodSandbox for \"05d422ce6353d7662e03640c5d5420ecbeaa5e19c88644e89090ef4a361695c3\" returns successfully"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.878450615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44xvh,Uid:aa6aeb24-8d4b-4960-99a0-65d0493743bf,Namespace:kube-system,Attempt:1,}"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.894124118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.894196191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.894205732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.894475889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1adfca4938f11e99c70e225d5379bfb589ee80b1fba287a03211e3014734d840 pid=1433 runtime=io.containerd.runc.v2
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.943498913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-dfhrt,Uid:d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07,Namespace:kube-system,Attempt:1,} returns sandbox id \"f47130c8243033754bf289854cfdaaabfb00d433590571253f9250a6e5b0a67d\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.946290770Z" level=info msg="CreateContainer within sandbox \"f47130c8243033754bf289854cfdaaabfb00d433590571253f9250a6e5b0a67d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.955481049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44xvh,Uid:aa6aeb24-8d4b-4960-99a0-65d0493743bf,Namespace:kube-system,Attempt:1,} returns sandbox id \"1adfca4938f11e99c70e225d5379bfb589ee80b1fba287a03211e3014734d840\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.957837315Z" level=info msg="CreateContainer within sandbox \"1adfca4938f11e99c70e225d5379bfb589ee80b1fba287a03211e3014734d840\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.959990920Z" level=info msg="CreateContainer within sandbox \"f47130c8243033754bf289854cfdaaabfb00d433590571253f9250a6e5b0a67d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"4146cba57a03aa90082db1c35fb586e817db3fc06d6388a6df167ea60f1c0329\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.960563285Z" level=info msg="StartContainer for \"4146cba57a03aa90082db1c35fb586e817db3fc06d6388a6df167ea60f1c0329\""
	May 31 18:01:04 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:04.005681930Z" level=info msg="CreateContainer within sandbox \"1adfca4938f11e99c70e225d5379bfb589ee80b1fba287a03211e3014734d840\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"8a8b463da962d2fe8ce361679f04bfb51fc7bda1ea9bd884502d0baeadbef622\""
	May 31 18:01:04 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:04.006229959Z" level=info msg="StartContainer for \"8a8b463da962d2fe8ce361679f04bfb51fc7bda1ea9bd884502d0baeadbef622\""
	May 31 18:01:04 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:04.131190966Z" level=info msg="StartContainer for \"4146cba57a03aa90082db1c35fb586e817db3fc06d6388a6df167ea60f1c0329\" returns successfully"
	May 31 18:01:04 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:04.201487884Z" level=info msg="StartContainer for \"8a8b463da962d2fe8ce361679f04bfb51fc7bda1ea9bd884502d0baeadbef622\" returns successfully"
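
Note: the containerd excerpt shows the kindnet and kube-proxy sandboxes being stopped and re-created with Attempt:1, matching the restarted containers in the status table above. The section is gathered from journald on the node:

    # containerd unit logs, as collected for this report (sketch)
    minikube ssh -p newest-cni-20220531175602-6903 -- sudo journalctl -u containerd --no-pager
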
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220531175602-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220531175602-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=newest-cni-20220531175602-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_59_58_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:59:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220531175602-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:01:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:01:02 +0000   Tue, 31 May 2022 17:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:01:02 +0000   Tue, 31 May 2022 17:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:01:02 +0000   Tue, 31 May 2022 17:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:01:02 +0000   Tue, 31 May 2022 17:59:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220531175602-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                a15f53f9-4c24-45d7-81a0-f7f59ad7b293
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-20220531175602-6903                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         72s
	  kube-system                 kindnet-dfhrt                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-newest-cni-20220531175602-6903              250m (3%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-newest-cni-20220531175602-6903     200m (2%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-44xvh                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-newest-cni-20220531175602-6903              100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 58s                kube-proxy  
	  Normal  Starting                 5s                 kube-proxy  
	  Normal  Starting                 72s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  72s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 12s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x8 over 11s)  kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x7 over 11s)  kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [096e907429e1c8fe3793e3ef5f1669af5784010ee0fd39e557b3cf212ed29a14] <==
	* {"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:00:58.933Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:00:58.934Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:00:58.934Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:00:58.934Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:00:58.934Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220531175602-6903 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:01:00.525Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:01:00.525Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:01:00.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:01:00.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> etcd [6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b] <==
	* {"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220531175602-6903 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:59:52.028Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:59:52.029Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:59:52.029Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:59:52.029Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  18:01:09 up  1:43,  0 users,  load average: 1.09, 1.03, 1.52
	Linux newest-cni-20220531175602-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914] <==
	* I0531 17:59:54.338548       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:59:54.338564       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:59:54.338682       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:59:54.402004       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:59:54.402332       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:59:55.237103       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:59:55.237130       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:59:55.242098       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:59:55.245112       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:59:55.245131       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:59:55.615274       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:59:55.645134       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:59:55.736778       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:59:55.741300       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 17:59:55.742203       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:59:55.745338       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:59:56.369085       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:59:57.362792       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:59:57.371299       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:59:57.382886       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:59:57.533219       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:00:09.372858       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:00:10.222431       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:00:11.125409       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:00:11.523393       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.96.240.49]
	
	* 
	* ==> kube-apiserver [c585073fed2ff09bf2a22fb5caae163971fa5d47d1c2a4dd91a16f7fe77baa1e] <==
	* E0531 18:01:02.217704       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0531 18:01:02.301399       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 18:01:02.301401       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 18:01:02.301434       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 18:01:02.303637       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 18:01:02.303682       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 18:01:02.303641       1 cache.go:39] Caches are synced for autoregister controller
	I0531 18:01:02.304710       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 18:01:02.320890       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:01:03.088410       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 18:01:03.088435       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 18:01:03.093873       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0531 18:01:03.329756       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:01:03.329822       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:01:03.329831       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:01:04.327260       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:01:04.382910       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:01:04.464666       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:01:04.473472       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:01:04.507628       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:01:04.512703       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:01:05.383987       1 controller.go:611] quota admission added evaluator for: namespaces
	I0531 18:01:05.453414       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.102.255.125]
	I0531 18:01:05.463854       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.108.171.112]
	
	* 
	* ==> kube-controller-manager [8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620] <==
	* I0531 18:00:09.417121       1 shared_informer.go:247] Caches are synced for GC 
	I0531 18:00:09.435053       1 shared_informer.go:247] Caches are synced for node 
	I0531 18:00:09.435076       1 range_allocator.go:173] Starting range CIDR allocator
	I0531 18:00:09.435081       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0531 18:00:09.435087       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0531 18:00:09.438911       1 range_allocator.go:374] Set node newest-cni-20220531175602-6903 PodCIDR to [192.168.0.0/24]
	I0531 18:00:09.461195       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0531 18:00:09.473856       1 shared_informer.go:247] Caches are synced for expand 
	I0531 18:00:09.477055       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:00:09.479165       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 18:00:09.504634       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:00:09.516907       1 shared_informer.go:247] Caches are synced for stateful set 
	I0531 18:00:09.518174       1 shared_informer.go:247] Caches are synced for attach detach 
	I0531 18:00:09.526157       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 18:00:09.747467       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 18:00:09.933062       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:00:09.981790       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:00:09.981829       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 18:00:10.175792       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-xx57r"
	I0531 18:00:10.182390       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-lv5sq"
	I0531 18:00:10.198396       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-xx57r"
	I0531 18:00:10.227411       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dfhrt"
	I0531 18:00:10.228692       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-44xvh"
	I0531 18:00:11.435224       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0531 18:00:11.440137       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-64wmz"
	
	* 
	* ==> kube-controller-manager [a75ee54116b3818460d131664635ff3fd9d57a25adc130516692fa394709cdf0] <==
	* I0531 18:01:05.955979       1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0531 18:01:05.956205       1 controllermanager.go:605] Started "csrsigning"
	I0531 18:01:05.956301       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0531 18:01:05.956319       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0531 18:01:05.956373       1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0531 18:01:05.957782       1 controllermanager.go:605] Started "ttl"
	I0531 18:01:05.957914       1 ttl_controller.go:121] Starting TTL controller
	I0531 18:01:05.957930       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0531 18:01:05.959446       1 node_lifecycle_controller.go:377] Sending events to api server.
	I0531 18:01:05.959621       1 taint_manager.go:163] "Sending events to api server"
	I0531 18:01:05.959706       1 node_lifecycle_controller.go:505] Controller will reconcile labels.
	I0531 18:01:05.959739       1 controllermanager.go:605] Started "nodelifecycle"
	I0531 18:01:05.959837       1 node_lifecycle_controller.go:539] Starting node controller
	I0531 18:01:05.959854       1 shared_informer.go:240] Waiting for caches to sync for taint
	E0531 18:01:05.976183       1 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0531 18:01:05.976295       1 controllermanager.go:605] Started "namespace"
	I0531 18:01:05.976375       1 namespace_controller.go:200] Starting namespace controller
	I0531 18:01:05.976392       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0531 18:01:05.980995       1 garbagecollector.go:146] Starting garbage collector controller
	I0531 18:01:05.981015       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0531 18:01:05.981032       1 graph_builder.go:289] GraphBuilder running
	I0531 18:01:05.981142       1 controllermanager.go:605] Started "garbagecollector"
	I0531 18:01:05.982753       1 node_ipam_controller.go:91] Sending events to api server.
	W0531 18:01:06.010116       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0531 18:01:06.019170       1 shared_informer.go:247] Caches are synced for tokens 
	
	* 
	* ==> kube-proxy [8a8b463da962d2fe8ce361679f04bfb51fc7bda1ea9bd884502d0baeadbef622] <==
	* I0531 18:01:04.239527       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:01:04.239576       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:01:04.239612       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:01:04.323325       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:01:04.323362       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:01:04.323373       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:01:04.323391       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:01:04.324088       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:01:04.324661       1 config.go:317] "Starting service config controller"
	I0531 18:01:04.324697       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:01:04.324816       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:01:04.324871       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:01:04.425669       1 shared_informer.go:247] Caches are synced for service config 
	I0531 18:01:04.425694       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0] <==
	* I0531 18:00:11.057248       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:00:11.057321       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:00:11.057358       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:00:11.122456       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:00:11.122493       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:00:11.122501       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:00:11.122515       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:00:11.122916       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:00:11.123553       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:00:11.123571       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:00:11.123610       1 config.go:317] "Starting service config controller"
	I0531 18:00:11.123626       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:00:11.224155       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:00:11.224156       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [c7007061d9990705414aa788d4bd89fc3051b5962e1daba8b7ce2346104acf7d] <==
	* W0531 18:00:59.005610       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0531 18:00:59.541543       1 serving.go:348] Generated self-signed cert in-memory
	W0531 18:01:02.130387       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 18:01:02.130633       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:01:02.130768       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 18:01:02.130859       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 18:01:02.210765       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0531 18:01:02.212554       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0531 18:01:02.212672       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 18:01:02.212681       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 18:01:02.212702       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0531 18:01:02.218510       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0531 18:01:02.218572       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0531 18:01:02.218682       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0531 18:01:02.218698       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0531 18:01:02.218995       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0531 18:01:02.219033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0531 18:01:02.221244       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E0531 18:01:02.221290       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	I0531 18:01:03.313191       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9] <==
	* W0531 17:59:54.325644       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:59:54.325915       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:59:54.325927       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 17:59:54.325988       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:54.326090       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:59:54.326341       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:54.327312       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:54.327329       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:59:54.327350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 17:59:54.327353       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:59:54.327365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:59:54.327366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:54.327375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:59:54.326452       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:54.327383       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:59:54.327393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:54.327594       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:59:54.327849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:59:55.221179       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:55.221220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:55.247249       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:55.247278       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:55.426699       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:59:55.426740       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0531 17:59:55.921497       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:00:32 UTC, end at Tue 2022-05-31 18:01:09 UTC. --
	May 31 18:01:01 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:01.716229     911 kubelet.go:2461] "Error getting node" err="node \"newest-cni-20220531175602-6903\" not found"
	May 31 18:01:01 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:01.816780     911 kubelet.go:2461] "Error getting node" err="node \"newest-cni-20220531175602-6903\" not found"
	May 31 18:01:01 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:01.917202     911 kubelet.go:2461] "Error getting node" err="node \"newest-cni-20220531175602-6903\" not found"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:02.017981     911 kubelet.go:2461] "Error getting node" err="node \"newest-cni-20220531175602-6903\" not found"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.118827     911 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.119678     911 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:02.120073     911 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.315441     911 kubelet_node_status.go:108] "Node was previously registered" node="newest-cni-20220531175602-6903"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.315571     911 kubelet_node_status.go:73] "Successfully registered node" node="newest-cni-20220531175602-6903"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.948092     911 apiserver.go:52] "Watching apiserver"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.951620     911 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.951888     911 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:03.037456     911 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.126802     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa6aeb24-8d4b-4960-99a0-65d0493743bf-kube-proxy\") pod \"kube-proxy-44xvh\" (UID: \"aa6aeb24-8d4b-4960-99a0-65d0493743bf\") " pod="kube-system/kube-proxy-44xvh"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.126852     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa6aeb24-8d4b-4960-99a0-65d0493743bf-lib-modules\") pod \"kube-proxy-44xvh\" (UID: \"aa6aeb24-8d4b-4960-99a0-65d0493743bf\") " pod="kube-system/kube-proxy-44xvh"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.126978     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07-cni-cfg\") pod \"kindnet-dfhrt\" (UID: \"d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07\") " pod="kube-system/kindnet-dfhrt"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127018     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9gl\" (UniqueName: \"kubernetes.io/projected/d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07-kube-api-access-mw9gl\") pod \"kindnet-dfhrt\" (UID: \"d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07\") " pod="kube-system/kindnet-dfhrt"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127051     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07-lib-modules\") pod \"kindnet-dfhrt\" (UID: \"d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07\") " pod="kube-system/kindnet-dfhrt"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127081     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa6aeb24-8d4b-4960-99a0-65d0493743bf-xtables-lock\") pod \"kube-proxy-44xvh\" (UID: \"aa6aeb24-8d4b-4960-99a0-65d0493743bf\") " pod="kube-system/kube-proxy-44xvh"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127110     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07-xtables-lock\") pod \"kindnet-dfhrt\" (UID: \"d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07\") " pod="kube-system/kindnet-dfhrt"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127164     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snz6z\" (UniqueName: \"kubernetes.io/projected/aa6aeb24-8d4b-4960-99a0-65d0493743bf-kube-api-access-snz6z\") pod \"kube-proxy-44xvh\" (UID: \"aa6aeb24-8d4b-4960-99a0-65d0493743bf\") " pod="kube-system/kube-proxy-44xvh"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127202     911 reconciler.go:157] "Reconciler: start to sync state"
	May 31 18:01:08 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:08.037902     911 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:01:08 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:08.048119     911 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 31 18:01:08 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:08.048163     911 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	

                                                
                                                
-- /stdout --
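
Note: the "describe nodes" output captured above reports the node NotReady (reason KubeletNotReady, "cni plugin not initialized"), consistent with the kubelet errors at the tail of the log. As a minimal sketch only — not part of this test harness — a client-go program along these lines could read that Ready condition directly; the package paths and default kubeconfig location are assumptions, and the test run above sets KUBECONFIG explicitly instead:

	// Sketch only (assumed deps: k8s.io/client-go, k8s.io/api, k8s.io/apimachinery).
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes the default kubeconfig path (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// For the node above this would print Ready=False reason=KubeletNotReady.
					fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}
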
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220531175602-6903 -n newest-cni-20220531175602-6903
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220531175602-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-lv5sq metrics-server-b955d9d8-64wmz storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220531175602-6903 describe pod coredns-64897985d-lv5sq metrics-server-b955d9d8-64wmz storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220531175602-6903 describe pod coredns-64897985d-lv5sq metrics-server-b955d9d8-64wmz storage-provisioner: exit status 1 (50.147929ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-lv5sq" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-64wmz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220531175602-6903 describe pod coredns-64897985d-lv5sq metrics-server-b955d9d8-64wmz storage-provisioner: exit status 1
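
Note: the NotFound errors above look like a race — the three pods listed as non-running at helpers_test.go:270 were evidently deleted in the ~50ms before the describe ran, so kubectl exits 1. A minimal sketch, assuming client-go, of a lookup that tolerates that race; describeIfPresent is a hypothetical helper, not part of helpers_test.go:

	// Sketch only; not part of the minikube test harness.
	package diag
	
	import (
		"context"
		"fmt"
	
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	func describeIfPresent(cs *kubernetes.Clientset, ns, name string) error {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// Expected when the pod was deleted between the list and this lookup.
			fmt.Printf("pod %s/%s already gone; skipping\n", ns, name)
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Printf("pod %s/%s phase=%s\n", ns, name, pod.Status.Phase)
		return nil
	}
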
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220531175602-6903
helpers_test.go:235: (dbg) docker inspect newest-cni-20220531175602-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71",
	        "Created": "2022-05-31T17:59:33.649637794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253885,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:00:32.570188094Z",
	            "FinishedAt": "2022-05-31T18:00:31.334161439Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71/hostname",
	        "HostsPath": "/var/lib/docker/containers/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71/hosts",
	        "LogPath": "/var/lib/docker/containers/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71/de51a3963e61e92e8de8d00d851ac0d66eee80f2c801aba1a03011b937c56e71-json.log",
	        "Name": "/newest-cni-20220531175602-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220531175602-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220531175602-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9f5edb84c83211fb13309e40f012e6d551140253c93c4590c4a1f80563f5c1ac-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f5edb84c83211fb13309e40f012e6d551140253c93c4590c4a1f80563f5c1ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f5edb84c83211fb13309e40f012e6d551140253c93c4590c4a1f80563f5c1ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f5edb84c83211fb13309e40f012e6d551140253c93c4590c4a1f80563f5c1ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220531175602-6903",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220531175602-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220531175602-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220531175602-6903",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220531175602-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72d03a9307c3cc13ada83f1b0caab90d8bcec4f331c358a3e12a4a2308c24a6a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/72d03a9307c3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220531175602-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de51a3963e61",
	                        "newest-cni-20220531175602-6903"
	                    ],
	                    "NetworkID": "8293cc9ba146f6498f2356f2bf1d8638ecf22835b98f3215a084a1bee9850a46",
	                    "EndpointID": "cd5311c17f42b411903f14e0feb1b3d1001a1339a5f91c6d4358fd443f0907f8",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
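(Editor's note: in the State block above, StartedAt (18:00:32) is later than FinishedAt (18:00:31), confirming the container was stopped and restarted, which matches the stop/start entries in the audit table below. A minimal Docker Engine Go SDK sketch that reads the same State fields follows; it is illustrative only and assumes a reachable local Docker daemon.)
	// Sketch: inspect the container's State via the Docker Engine Go SDK,
	// the programmatic equivalent of the `docker inspect` call above.
	// The container name is taken from this report; the rest is generic.
	package main
	
	import (
		"context"
		"fmt"
	
		"github.com/docker/docker/client"
	)
	
	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()
		info, err := cli.ContainerInspect(context.Background(), "newest-cni-20220531175602-6903")
		if err != nil {
			panic(err)
		}
		// Prints e.g. "running true 2022-05-31T18:00:32... 2022-05-31T18:00:31..."
		fmt.Println(info.State.Status, info.State.Running, info.State.StartedAt, info.State.FinishedAt)
	}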
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220531175602-6903 -n newest-cni-20220531175602-6903
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20220531175602-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:45 UTC | 31 May 22 17:49 UTC |
	|         | --memory=2048                                              |                                                |         |                |                     |                     |
	|         | --alsologtostderr                                          |                                                |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |                |                     |                     |
	|         | --cni=bridge --driver=docker                               |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	| ssh     | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:49 UTC | 31 May 22 17:49 UTC |
	|         | pgrep -a kubelet                                           |                                                |         |                |                     |                     |
	| logs    | calico-20220531174030-6903                                 | calico-20220531174030-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p calico-20220531174030-6903                              | calico-20220531174030-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20220531175323-6903      | jenkins | v1.26.0-beta.1 | 31 May 22 17:53 UTC | 31 May 22 17:53 UTC |
	|         | disable-driver-mounts-20220531175323-6903                  |                                                |         |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:47 UTC | 31 May 22 17:54 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |         |                |                     |                     |
	|         | --keep-context=false                                       |                                                |         |                |                     |                     |
	|         | --driver=docker                                            |                                                |         |                |                     |                     |
	|         | --container-runtime=containerd                             |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531174534-6903            | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | old-k8s-version-20220531174534-6903                        |                                                |         |                |                     |                     |
	| logs    | enable-default-cni-20220531174029-6903                     | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:55 UTC | 31 May 22 17:55 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | bridge-20220531174029-6903                                 | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220531174029-6903         | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	|         | enable-default-cni-20220531174029-6903                     |                                                |         |                |                     |                     |
	| delete  | -p bridge-20220531174029-6903                              | bridge-20220531174029-6903                     | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 17:56 UTC |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:58 UTC | 31 May 22 17:58 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 17:59 UTC | 31 May 22 17:59 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 17:56 UTC | 31 May 22 18:00 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:00:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:00:31.855034  253603 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:00:31.855128  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855137  253603 out.go:309] Setting ErrFile to fd 2...
	I0531 18:00:31.855169  253603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:00:31.855275  253603 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:00:31.855500  253603 out.go:303] Setting JSON to false
	I0531 18:00:31.857002  253603 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6183,"bootTime":1654013849,"procs":755,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:00:31.857065  253603 start.go:125] virtualization: kvm guest
	I0531 18:00:31.859650  253603 out.go:177] * [newest-cni-20220531175602-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:00:31.861106  253603 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:00:31.861145  253603 notify.go:193] Checking for updates...
	I0531 18:00:31.863620  253603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:00:31.865010  253603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:31.866391  253603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:00:31.867875  253603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:00:31.871501  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:31.872091  253603 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:00:31.913476  253603 docker.go:137] docker version: linux-20.10.16
	I0531 18:00:31.913607  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.012796  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:31.941581138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.012892  253603 docker.go:254] overlay module found
	I0531 18:00:32.015694  253603 out.go:177] * Using the docker driver based on existing profile
	I0531 18:00:32.016948  253603 start.go:284] selected driver: docker
	I0531 18:00:32.016961  253603 start.go:806] validating driver "docker" against &{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.017071  253603 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:00:32.017980  253603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:00:32.118816  253603 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 18:00:32.047560918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:00:32.119131  253603 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0531 18:00:32.119167  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:32.119175  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:32.119195  253603 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119208  253603 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 18:00:32.119215  253603 start_flags.go:306] config:
	{Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:32.122424  253603 out.go:177] * Starting control plane node newest-cni-20220531175602-6903 in cluster newest-cni-20220531175602-6903
	I0531 18:00:32.123755  253603 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:00:32.125291  253603 out.go:177] * Pulling base image ...
	I0531 18:00:32.126765  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:32.126808  253603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:00:32.126822  253603 cache.go:57] Caching tarball of preloaded images
	I0531 18:00:32.126856  253603 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:00:32.127020  253603 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:00:32.127034  253603 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:00:32.127170  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.176155  253603 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:00:32.176180  253603 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:00:32.176199  253603 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:00:32.176233  253603 start.go:352] acquiring machines lock for newest-cni-20220531175602-6903: {Name:mk17d90b6d3b0fde22fd963b3786e868dc154060 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:00:32.176322  253603 start.go:356] acquired machines lock for "newest-cni-20220531175602-6903" in 69.182µs
	I0531 18:00:32.176340  253603 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:00:32.176344  253603 fix.go:55] fixHost starting: 
	I0531 18:00:32.176560  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.209761  253603 fix.go:103] recreateIfNeeded on newest-cni-20220531175602-6903: state=Stopped err=<nil>
	W0531 18:00:32.209791  253603 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:00:32.212875  253603 out.go:177] * Restarting existing docker container for "newest-cni-20220531175602-6903" ...
	I0531 18:00:30.443775  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.444063  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:32.214225  253603 cli_runner.go:164] Run: docker start newest-cni-20220531175602-6903
	I0531 18:00:32.577327  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:00:32.610657  253603 kic.go:416] container "newest-cni-20220531175602-6903" state is running.
	I0531 18:00:32.611011  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:32.643675  253603 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/config.json ...
	I0531 18:00:32.643905  253603 machine.go:88] provisioning docker machine ...
	I0531 18:00:32.643932  253603 ubuntu.go:169] provisioning hostname "newest-cni-20220531175602-6903"
	I0531 18:00:32.643983  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:32.674555  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:32.674809  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:32.674837  253603 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220531175602-6903 && echo "newest-cni-20220531175602-6903" | sudo tee /etc/hostname
	I0531 18:00:32.675642  253603 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46432->127.0.0.1:49427: read: connection reset by peer
	I0531 18:00:35.795562  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220531175602-6903
	
	I0531 18:00:35.795625  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:35.826982  253603 main.go:134] libmachine: Using SSH client type: native
	I0531 18:00:35.827166  253603 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0531 18:00:35.827189  253603 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220531175602-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220531175602-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220531175602-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:00:35.938582  253603 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:00:35.938614  253603 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:00:35.938689  253603 ubuntu.go:177] setting up certificates
	I0531 18:00:35.938700  253603 provision.go:83] configureAuth start
	I0531 18:00:35.938739  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:35.970778  253603 provision.go:138] copyHostCerts
	I0531 18:00:35.970836  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:00:35.970855  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:00:35.970915  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:00:35.971070  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:00:35.971088  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:00:35.971129  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:00:35.971236  253603 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:00:35.971254  253603 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:00:35.971287  253603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:00:35.971355  253603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220531175602-6903 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220531175602-6903]
	I0531 18:00:36.142238  253603 provision.go:172] copyRemoteCerts
	I0531 18:00:36.142291  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:00:36.142320  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.173472  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.254066  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:00:36.271055  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:00:36.287105  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:00:36.302927  253603 provision.go:86] duration metric: configureAuth took 364.217481ms
	I0531 18:00:36.302948  253603 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:00:36.303122  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:00:36.303134  253603 machine.go:91] provisioned docker machine in 3.659215237s
	I0531 18:00:36.303168  253603 start.go:306] post-start starting for "newest-cni-20220531175602-6903" (driver="docker")
	I0531 18:00:36.303175  253603 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:00:36.303216  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:00:36.303261  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.335634  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.418002  253603 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:00:36.420669  253603 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:00:36.420693  253603 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:00:36.420701  253603 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:00:36.420706  253603 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:00:36.420719  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:00:36.420765  253603 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:00:36.420825  253603 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:00:36.420897  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:00:36.427208  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:36.443819  253603 start.go:309] post-start completed in 140.639246ms
	I0531 18:00:36.443888  253603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:00:36.443930  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.477971  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.555314  253603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:00:36.559129  253603 fix.go:57] fixHost completed within 4.38277864s
	I0531 18:00:36.559171  253603 start.go:81] releasing machines lock for "newest-cni-20220531175602-6903", held for 4.382836668s
	I0531 18:00:36.559246  253603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220531175602-6903
	I0531 18:00:36.590986  253603 ssh_runner.go:195] Run: systemctl --version
	I0531 18:00:36.591023  253603 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:00:36.591084  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.591027  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:00:36.624550  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.625023  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:00:36.722476  253603 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:00:36.732794  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:00:36.741236  253603 docker.go:187] disabling docker service ...
	I0531 18:00:36.741281  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:00:36.757377  253603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:00:36.765762  253603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:00:36.850081  253603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:00:34.943765  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.944411  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:39.443721  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:36.930380  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:00:36.938984  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:00:36.951805  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
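The two /bin/bash steps above lay down the CRI client and runtime configs (the `%!s(MISSING)` in the second is logging noise from an unescaped `%`; the format verb that actually ran was `printf %s`, as in the crictl step). The crictl.yaml heredoc renders to:

    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock

and the base64 payload, decoded, is minikube's generated /etc/containerd/config.toml. The parts that matter for this run are (excerpt):

    version = 2
    root = "/var/lib/containerd"
    state = "/run/containerd"
    ...
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "k8s.gcr.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.mk"

The conf_dir = "/etc/cni/net.mk" line is why the kubelet further down is started with --cni-conf-dir=/etc/cni/net.mk.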
	I0531 18:00:36.964223  253603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:00:36.970217  253603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:00:36.976123  253603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:00:37.050759  253603 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:00:37.133255  253603 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:00:37.133326  253603 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:00:37.136650  253603 start.go:468] Will wait 60s for crictl version
	I0531 18:00:37.136705  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:37.162540  253603 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:00:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
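containerd was restarted a moment ago, so its CRI service answers `crictl version` with "server is not initialized yet" and minikube's retry helper schedules another attempt. A minimal sketch of that retry-until-ready loop, using a hypothetical waitForCRI function (the real code, retry.go, uses a randomized backoff rather than a fixed sleep):

    package cri

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForCRI polls `sudo crictl version` until the runtime answers or
    // the deadline passes, mirroring the retry visible in the log.
    func waitForCRI(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := exec.Command("sudo", "crictl", "version").Run(); err == nil {
    			return nil
    		} else if time.Now().After(deadline) {
    			return fmt.Errorf("crictl not ready after %s: %w", timeout, err)
    		}
    		time.Sleep(5 * time.Second) // the actual backoff chosen here was ~11 s
    	}
    }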
	I0531 18:00:41.943597  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:43.944098  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.209660  253603 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:00:48.232631  253603 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:00:48.232687  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.260476  253603 ssh_runner.go:195] Run: containerd --version
	I0531 18:00:48.288516  253603 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:00:48.289983  253603 cli_runner.go:164] Run: docker network inspect newest-cni-20220531175602-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:00:48.321110  253603 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 18:00:48.324362  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:00:48.335260  253603 out.go:177]   - kubelet.network-plugin=cni
	I0531 18:00:48.336944  253603 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0531 18:00:48.338457  253603 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:00:46.442937  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:48.443904  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.444077  243743 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:00:50.446109  243743 node_ready.go:38] duration metric: took 4m0.008452547s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:00:50.448431  243743 out.go:177] 
	W0531 18:00:50.449997  243743 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:00:50.450021  243743 out.go:239] * 
	W0531 18:00:50.450791  243743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:00:50.452520  243743 out.go:177] 
	I0531 18:00:48.339824  253603 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:00:48.339884  253603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:00:48.363681  253603 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:00:48.363700  253603 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:00:48.363745  253603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:00:48.385839  253603 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:00:48.385856  253603 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:00:48.385893  253603 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:00:48.408057  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:00:48.408077  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:00:48.408091  253603 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0531 18:00:48.408103  253603 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220531175602-6903 NodeName:newest-cni-20220531175602-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect
:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:00:48.408230  253603 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220531175602-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
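Everything in the kubeadm config above is rendered from the options struct logged at kubeadm.go:158: clusterName mk, podSubnet 192.168.111.111/16 (taken verbatim from the test's kubeadm.pod-network-cidr extra option) and serviceSubnet 10.96.0.0/12 all trace back to it. A much-reduced sketch of that rendering step in Go, with hypothetical names rather than minikube's real ktmpl templates:

    package bsutil

    import (
    	"os"
    	"text/template"
    )

    type kubeadmParams struct {
    	K8sVersion, ClusterName, PodSubnet, ServiceSubnet string
    }

    // A fragment of a ClusterConfiguration template; the real templates
    // cover the full Init/Cluster/Kubelet/KubeProxy documents shown above.
    var clusterCfg = template.Must(template.New("cc").Parse(
    	"apiVersion: kubeadm.k8s.io/v1beta3\n" +
    		"kind: ClusterConfiguration\n" +
    		"clusterName: {{.ClusterName}}\n" +
    		"kubernetesVersion: {{.K8sVersion}}\n" +
    		"networking:\n" +
    		"  podSubnet: \"{{.PodSubnet}}\"\n" +
    		"  serviceSubnet: {{.ServiceSubnet}}\n"))

    func renderExample() error {
    	return clusterCfg.Execute(os.Stdout, kubeadmParams{
    		K8sVersion:    "v1.23.6",
    		ClusterName:   "mk",
    		PodSubnet:     "192.168.111.111/16",
    		ServiceSubnet: "10.96.0.0/12",
    	})
    }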
	I0531 18:00:48.408307  253603 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220531175602-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
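The doubled ExecStart in the drop-in above is the standard systemd override idiom: a service may have only one ExecStart, so an override file must first clear the inherited value with a bare ExecStart= before supplying its own command line. The drop-in and unit are then written out by the scp-from-memory steps just below.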
	I0531 18:00:48.408350  253603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:00:48.414874  253603 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:00:48.414928  253603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:00:48.421138  253603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes)
	I0531 18:00:48.433792  253603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:00:48.447663  253603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2195 bytes)
	I0531 18:00:48.459853  253603 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:00:48.462496  253603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:00:48.470850  253603 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903 for IP: 192.168.58.2
	I0531 18:00:48.470935  253603 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:00:48.470970  253603 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:00:48.471030  253603 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/client.key
	I0531 18:00:48.471080  253603 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key.cee25041
	I0531 18:00:48.471114  253603 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key
	I0531 18:00:48.471247  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:00:48.471280  253603 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:00:48.471292  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:00:48.471322  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:00:48.471348  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:00:48.471369  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:00:48.471406  253603 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:00:48.471990  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:00:48.487996  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:00:48.504050  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:00:48.520129  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/newest-cni-20220531175602-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:00:48.536197  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:00:48.551773  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:00:48.567698  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:00:48.583534  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:00:48.599284  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:00:48.615488  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:00:48.631736  253603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:00:48.648044  253603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:00:48.659819  253603 ssh_runner.go:195] Run: openssl version
	I0531 18:00:48.664514  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:00:48.671684  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.674554  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.674592  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:00:48.678953  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:00:48.685183  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:00:48.691850  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.694734  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.694775  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:00:48.699108  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:00:48.705843  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:00:48.713797  253603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.716588  253603 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.716628  253603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:00:48.720988  253603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:00:48.727223  253603 kubeadm.go:395] StartCluster: {Name:newest-cni-20220531175602-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220531175602-6903 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[Me
tricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:00:48.727350  253603 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:00:48.727391  253603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:00:48.751975  253603 cri.go:87] found id: "776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af"
	I0531 18:00:48.751998  253603 cri.go:87] found id: "e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0"
	I0531 18:00:48.752009  253603 cri.go:87] found id: "8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620"
	I0531 18:00:48.752025  253603 cri.go:87] found id: "2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914"
	I0531 18:00:48.752038  253603 cri.go:87] found id: "f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9"
	I0531 18:00:48.752051  253603 cri.go:87] found id: "6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b"
	I0531 18:00:48.752060  253603 cri.go:87] found id: ""
	I0531 18:00:48.752094  253603 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:00:48.763086  253603 cri.go:114] JSON = null
	W0531 18:00:48.763128  253603 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
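This warning is minikube reconciling two views of the runtime before restarting: `crictl ps -a` found six kube-system containers, while `runc --root /run/containerd/runc/k8s.io list` (used to find paused containers) returned null, so there was nothing to unpause. The mismatch is logged as a warning and startup continues with the existing configuration files found just below.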
	I0531 18:00:48.763217  253603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:00:48.769482  253603 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:00:48.769502  253603 kubeadm.go:626] restartCluster start
	I0531 18:00:48.769537  253603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:00:48.775590  253603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:48.776475  253603 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220531175602-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:00:48.777108  253603 kubeconfig.go:127] "newest-cni-20220531175602-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:00:48.777968  253603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:00:48.779498  253603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:00:48.785488  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:48.785519  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:48.793052  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:48.993429  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:48.993482  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.001612  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.193914  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.193974  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.202307  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.393581  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.393647  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.401876  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.594165  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.594228  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.602448  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.793873  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.793934  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:49.802272  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:49.993549  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:49.993606  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.002105  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.193422  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.193478  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.201805  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.394099  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.394197  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.402406  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.593662  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.593737  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.602754  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.794037  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.794083  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:50.803034  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:50.993253  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:50.993322  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.002295  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.193608  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.193667  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.201663  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.393968  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.394033  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.402169  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.593519  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.593576  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.602288  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.793534  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.793598  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.803943  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.803964  253603 api_server.go:165] Checking apiserver status ...
	I0531 18:00:51.803995  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:00:51.812522  253603 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
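Three seconds of pgrep probes roughly 200 ms apart all exit with status 1: no kube-apiserver process exists yet, so restartCluster concludes below that the control plane needs reconfiguring. The shape of that wait loop, as a hedged sketch (a hypothetical apiserverUp helper, not minikube's actual kverify code):

    package kverify

    import (
    	"errors"
    	"os/exec"
    	"time"
    )

    // apiserverUp polls `sudo pgrep -xnf kube-apiserver.*minikube.*` until a
    // matching process appears or the time budget is exhausted.
    func apiserverUp(budget time.Duration) error {
    	deadline := time.Now().Add(budget)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
    		if out, err := cmd.Output(); err == nil && len(out) > 0 {
    			return nil // a matching apiserver process is up
    		}
    		time.Sleep(200 * time.Millisecond) // matches the cadence in the log
    	}
    	return errors.New("timed out waiting for the condition")
    }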
	I0531 18:00:51.812554  253603 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:00:51.812560  253603 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:00:51.812574  253603 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:00:51.812615  253603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:00:51.839954  253603 cri.go:87] found id: "776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af"
	I0531 18:00:51.839976  253603 cri.go:87] found id: "e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0"
	I0531 18:00:51.839982  253603 cri.go:87] found id: "8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620"
	I0531 18:00:51.839989  253603 cri.go:87] found id: "2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914"
	I0531 18:00:51.839994  253603 cri.go:87] found id: "f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9"
	I0531 18:00:51.840001  253603 cri.go:87] found id: "6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b"
	I0531 18:00:51.840013  253603 cri.go:87] found id: ""
	I0531 18:00:51.840018  253603 cri.go:232] Stopping containers: [776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0 8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620 2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914 f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9 6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b]
	I0531 18:00:51.840059  253603 ssh_runner.go:195] Run: which crictl
	I0531 18:00:51.842973  253603 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 776259150b44a3a86234a700ba1627beae1388af410df8e48b58b75374e307af e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0 8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620 2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914 f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9 6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b
	I0531 18:00:51.869603  253603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:00:51.880644  253603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:00:51.887664  253603 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 17:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 17:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:59 /etc/kubernetes/scheduler.conf
	
	I0531 18:00:51.887720  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:00:51.894538  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:00:51.901534  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:00:51.908371  253603 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.908424  253603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:00:51.917592  253603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:00:51.925101  253603 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:00:51.925151  253603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:00:51.931258  253603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:00:51.937908  253603 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:00:51.937925  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:51.981409  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.730818  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.866579  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.918070  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:00:52.960507  253603 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:00:52.960554  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:53.469301  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:53.969201  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:54.469096  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:54.968777  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:55.468873  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:55.968873  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:56.468973  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:56.969026  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:57.468917  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:57.968887  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:58.469411  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:58.969742  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:00:59.011037  253603 api_server.go:71] duration metric: took 6.050532367s to wait for apiserver process to appear ...
	I0531 18:00:59.011067  253603 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:00:59.011079  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:00:59.011494  253603 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0531 18:00:59.512207  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:02.105106  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:01:02.105133  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
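A 403 at this point is expected: the probe hits /healthz anonymously, and the RBAC bootstrap roles that let system:anonymous read the health endpoints are created by an apiserver post-start hook that has not finished yet; the 500 responses that follow show exactly that hook ([-]poststarthook/rbac/bootstrap-roles) still pending.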
	I0531 18:01:02.512478  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:02.516889  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:01:02.516910  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:01:03.012313  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:03.016705  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:01:03.016731  253603 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:01:03.512288  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:03.516555  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 18:01:03.522009  253603 api_server.go:140] control plane version: v1.23.6
	I0531 18:01:03.522027  253603 api_server.go:130] duration metric: took 4.510954896s to wait for apiserver health ...
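End to end, recovering the apiserver took about 10.6 s: 6.05 s from the kubeadm restart phases until a kube-apiserver process appeared, plus another 4.51 s until /healthz returned 200.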
	I0531 18:01:03.522036  253603 cni.go:95] Creating CNI manager for ""
	I0531 18:01:03.522043  253603 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:01:03.524134  253603 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:01:03.525439  253603 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:01:03.529095  253603 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:01:03.529112  253603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:01:03.541449  253603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:01:04.388379  253603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:01:04.394833  253603 system_pods.go:59] 9 kube-system pods found
	I0531 18:01:04.394868  253603 system_pods.go:61] "coredns-64897985d-lv5sq" [203ef58b-2708-4651-ab86-861f5fa69372] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394878  253603 system_pods.go:61] "etcd-newest-cni-20220531175602-6903" [5f7cb658-1ceb-4c47-8bee-effc9614aa23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:01:04.394887  253603 system_pods.go:61] "kindnet-dfhrt" [d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:01:04.394895  253603 system_pods.go:61] "kube-apiserver-newest-cni-20220531175602-6903" [d67d50e2-7deb-4c61-81a8-36012352da0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:01:04.394908  253603 system_pods.go:61] "kube-controller-manager-newest-cni-20220531175602-6903" [d1b6d2af-6fb6-4c89-bfbf-d40d8c24f4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:01:04.394914  253603 system_pods.go:61] "kube-proxy-44xvh" [aa6aeb24-8d4b-4960-99a0-65d0493743bf] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:01:04.394927  253603 system_pods.go:61] "kube-scheduler-newest-cni-20220531175602-6903" [5f2f1f7a-3268-4649-9525-4849378243c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:01:04.394933  253603 system_pods.go:61] "metrics-server-b955d9d8-64wmz" [9995c513-1795-474c-833a-e60e61dc0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394938  253603 system_pods.go:61] "storage-provisioner" [368826ec-ecba-4116-971f-0e75506e77bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.394945  253603 system_pods.go:74] duration metric: took 6.541942ms to wait for pod list to return data ...
	I0531 18:01:04.394952  253603 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:01:04.397297  253603 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:01:04.397318  253603 node_conditions.go:123] node cpu capacity is 8
	I0531 18:01:04.397328  253603 node_conditions.go:105] duration metric: took 2.369222ms to run NodePressure ...
	I0531 18:01:04.397343  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:01:04.522242  253603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:01:04.528860  253603 ops.go:34] apiserver oom_adj: -16
	I0531 18:01:04.528888  253603 kubeadm.go:630] restartCluster took 15.759378612s
	I0531 18:01:04.528897  253603 kubeadm.go:397] StartCluster complete in 15.801681788s
	I0531 18:01:04.528917  253603 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:01:04.529033  253603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:01:04.530679  253603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:01:04.533767  253603 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531175602-6903" rescaled to 1
	I0531 18:01:04.533818  253603 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:01:04.533838  253603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:01:04.536326  253603 out.go:177] * Verifying Kubernetes components...
	I0531 18:01:04.533856  253603 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 18:01:04.534015  253603 config.go:178] Loaded profile config "newest-cni-20220531175602-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:01:04.537649  253603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:01:04.537683  253603 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537700  253603 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537713  253603 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531175602-6903"
	I0531 18:01:04.537713  253603 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537715  253603 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531175602-6903"
	W0531 18:01:04.537721  253603 addons.go:165] addon storage-provisioner should already be in state true
	W0531 18:01:04.537727  253603 addons.go:165] addon metrics-server should already be in state true
	I0531 18:01:04.537767  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.537777  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.537687  253603 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531175602-6903"
	I0531 18:01:04.537727  253603 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531175602-6903"
	I0531 18:01:04.537814  253603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531175602-6903"
	W0531 18:01:04.537839  253603 addons.go:165] addon dashboard should already be in state true
	I0531 18:01:04.537886  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.538099  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538258  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538288  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.538354  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.582251  253603 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:01:04.583780  253603 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:01:04.585078  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:01:04.585101  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:01:04.585148  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.586519  253603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:01:04.588458  253603 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:01:04.589819  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:01:04.589835  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:01:04.589870  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.588540  253603 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:01:04.589914  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:01:04.589608  253603 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531175602-6903"
	W0531 18:01:04.589994  253603 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:01:04.590025  253603 host.go:66] Checking if "newest-cni-20220531175602-6903" exists ...
	I0531 18:01:04.590456  253603 cli_runner.go:164] Run: docker container inspect newest-cni-20220531175602-6903 --format={{.State.Status}}
	I0531 18:01:04.589970  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.622440  253603 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:01:04.622511  253603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:01:04.622642  253603 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 18:01:04.633508  253603 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:01:04.633529  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:01:04.633581  253603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531175602-6903
	I0531 18:01:04.633878  253603 api_server.go:71] duration metric: took 100.025723ms to wait for apiserver process to appear ...
	I0531 18:01:04.633902  253603 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:01:04.633915  253603 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 18:01:04.636308  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.639626  253603 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 18:01:04.640522  253603 api_server.go:140] control plane version: v1.23.6
	I0531 18:01:04.640542  253603 api_server.go:130] duration metric: took 6.632874ms to wait for apiserver health ...
	I0531 18:01:04.640552  253603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:01:04.641487  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.650123  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.651235  253603 system_pods.go:59] 9 kube-system pods found
	I0531 18:01:04.651429  253603 system_pods.go:61] "coredns-64897985d-lv5sq" [203ef58b-2708-4651-ab86-861f5fa69372] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651499  253603 system_pods.go:61] "etcd-newest-cni-20220531175602-6903" [5f7cb658-1ceb-4c47-8bee-effc9614aa23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:01:04.651514  253603 system_pods.go:61] "kindnet-dfhrt" [d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:01:04.651525  253603 system_pods.go:61] "kube-apiserver-newest-cni-20220531175602-6903" [d67d50e2-7deb-4c61-81a8-36012352da0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:01:04.651537  253603 system_pods.go:61] "kube-controller-manager-newest-cni-20220531175602-6903" [d1b6d2af-6fb6-4c89-bfbf-d40d8c24f4d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:01:04.651547  253603 system_pods.go:61] "kube-proxy-44xvh" [aa6aeb24-8d4b-4960-99a0-65d0493743bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:01:04.651557  253603 system_pods.go:61] "kube-scheduler-newest-cni-20220531175602-6903" [5f2f1f7a-3268-4649-9525-4849378243c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:01:04.651565  253603 system_pods.go:61] "metrics-server-b955d9d8-64wmz" [9995c513-1795-474c-833a-e60e61dc0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651574  253603 system_pods.go:61] "storage-provisioner" [368826ec-ecba-4116-971f-0e75506e77bd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:01:04.651580  253603 system_pods.go:74] duration metric: took 11.022992ms to wait for pod list to return data ...
	I0531 18:01:04.651588  253603 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:01:04.653854  253603 default_sa.go:45] found service account: "default"
	I0531 18:01:04.653878  253603 default_sa.go:55] duration metric: took 2.284188ms for default service account to be created ...
	I0531 18:01:04.653893  253603 kubeadm.go:572] duration metric: took 120.041989ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 18:01:04.653922  253603 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:01:04.656488  253603 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:01:04.656514  253603 node_conditions.go:123] node cpu capacity is 8
	I0531 18:01:04.656527  253603 node_conditions.go:105] duration metric: took 2.599307ms to run NodePressure ...
	I0531 18:01:04.656538  253603 start.go:213] waiting for startup goroutines ...
	I0531 18:01:04.673010  253603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/newest-cni-20220531175602-6903/id_rsa Username:docker}
	I0531 18:01:04.728342  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:01:04.728368  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:01:04.736428  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:01:04.736451  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:01:04.742828  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:01:04.742852  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:01:04.746024  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:01:04.750055  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:01:04.750076  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:01:04.758284  253603 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:01:04.758304  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:01:04.801922  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:01:04.801947  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:01:04.802275  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:01:04.807930  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:01:04.820976  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:01:04.821004  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:01:04.911836  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:01:04.911866  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:01:04.931751  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:01:04.931779  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:01:05.022410  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:01:05.022437  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:01:05.105670  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:01:05.105701  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:01:05.123433  253603 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:01:05.123460  253603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:01:05.202647  253603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:01:05.305415  253603 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531175602-6903"
	I0531 18:01:05.471026  253603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:01:05.472226  253603 addons.go:417] enableAddons completed in 938.375737ms
	I0531 18:01:05.510490  253603 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0531 18:01:05.512509  253603 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531175602-6903" cluster and "default" namespace by default
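For reference, the per-check healthz breakdown polled in the stderr log above (the [+]/[-] lines in the 500 responses, followed by the eventual 200) can be requested directly from the apiserver. A minimal sketch, assuming kubectl is already pointed at this cluster's kubeconfig:

    # Ask the apiserver for the verbose healthz breakdown (same
    # [+]/[-] per-check format seen in the log above).
    kubectl get --raw '/healthz?verbose'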
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	8a8b463da962d       4c03754524064       7 seconds ago        Running             kube-proxy                1                   1adfca4938f11
	4146cba57a03a       6de166512aa22       7 seconds ago        Running             kindnet-cni               1                   f47130c824303
	096e907429e1c       25f8c7f3da61c       12 seconds ago       Running             etcd                      1                   efcea4a25d2db
	c7007061d9990       595f327f224a4       12 seconds ago       Running             kube-scheduler            1                   46eeb0ab4af95
	c585073fed2ff       8fa62c12256df       12 seconds ago       Running             kube-apiserver            1                   fad994a20d15c
	a75ee54116b38       df7b72818ad2e       12 seconds ago       Running             kube-controller-manager   1                   edb465ab109ce
	776259150b44a       6de166512aa22       About a minute ago   Exited              kindnet-cni               0                   22feb6e4c9d92
	e8f79a0b14e7b       4c03754524064       About a minute ago   Exited              kube-proxy                0                   05d422ce6353d
	8eda42b95092e       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   2a956c579ce23
	2dfc683928eed       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   afdb62e6e1a44
	f3a9c3a521d42       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   a2c7f4ff33605
	6720e78c02157       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   9a5652b80be82
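	
	On a containerd runtime the container status table above is what CRI tooling reports; to regenerate it inside the node, one option (a sketch, assuming the crictl binary minikube ships and this profile name) is:

    # List all containers, running and exited, over the CRI socket.
    minikube -p newest-cni-20220531175602-6903 ssh -- sudo crictl ps -a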
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 18:00:32 UTC, end at Tue 2022-05-31 18:01:11 UTC. --
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.579355662Z" level=info msg="StopPodSandbox for \"22feb6e4c9d926e1f16066d611274efc2b8fe4c8f80c4c19aa549218f818a02b\" returns successfully"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.580030347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-dfhrt,Uid:d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07,Namespace:kube-system,Attempt:1,}"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.596936102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.597016853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.597030478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.597319541Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f47130c8243033754bf289854cfdaaabfb00d433590571253f9250a6e5b0a67d pid=1399 runtime=io.containerd.runc.v2
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.877635106Z" level=info msg="StopPodSandbox for \"05d422ce6353d7662e03640c5d5420ecbeaa5e19c88644e89090ef4a361695c3\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.877733761Z" level=info msg="Container to stop \"e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.877838186Z" level=info msg="TearDown network for sandbox \"05d422ce6353d7662e03640c5d5420ecbeaa5e19c88644e89090ef4a361695c3\" successfully"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.877858644Z" level=info msg="StopPodSandbox for \"05d422ce6353d7662e03640c5d5420ecbeaa5e19c88644e89090ef4a361695c3\" returns successfully"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.878450615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44xvh,Uid:aa6aeb24-8d4b-4960-99a0-65d0493743bf,Namespace:kube-system,Attempt:1,}"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.894124118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.894196191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.894205732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.894475889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1adfca4938f11e99c70e225d5379bfb589ee80b1fba287a03211e3014734d840 pid=1433 runtime=io.containerd.runc.v2
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.943498913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-dfhrt,Uid:d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07,Namespace:kube-system,Attempt:1,} returns sandbox id \"f47130c8243033754bf289854cfdaaabfb00d433590571253f9250a6e5b0a67d\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.946290770Z" level=info msg="CreateContainer within sandbox \"f47130c8243033754bf289854cfdaaabfb00d433590571253f9250a6e5b0a67d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.955481049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44xvh,Uid:aa6aeb24-8d4b-4960-99a0-65d0493743bf,Namespace:kube-system,Attempt:1,} returns sandbox id \"1adfca4938f11e99c70e225d5379bfb589ee80b1fba287a03211e3014734d840\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.957837315Z" level=info msg="CreateContainer within sandbox \"1adfca4938f11e99c70e225d5379bfb589ee80b1fba287a03211e3014734d840\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.959990920Z" level=info msg="CreateContainer within sandbox \"f47130c8243033754bf289854cfdaaabfb00d433590571253f9250a6e5b0a67d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"4146cba57a03aa90082db1c35fb586e817db3fc06d6388a6df167ea60f1c0329\""
	May 31 18:01:03 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:03.960563285Z" level=info msg="StartContainer for \"4146cba57a03aa90082db1c35fb586e817db3fc06d6388a6df167ea60f1c0329\""
	May 31 18:01:04 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:04.005681930Z" level=info msg="CreateContainer within sandbox \"1adfca4938f11e99c70e225d5379bfb589ee80b1fba287a03211e3014734d840\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"8a8b463da962d2fe8ce361679f04bfb51fc7bda1ea9bd884502d0baeadbef622\""
	May 31 18:01:04 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:04.006229959Z" level=info msg="StartContainer for \"8a8b463da962d2fe8ce361679f04bfb51fc7bda1ea9bd884502d0baeadbef622\""
	May 31 18:01:04 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:04.131190966Z" level=info msg="StartContainer for \"4146cba57a03aa90082db1c35fb586e817db3fc06d6388a6df167ea60f1c0329\" returns successfully"
	May 31 18:01:04 newest-cni-20220531175602-6903 containerd[528]: time="2022-05-31T18:01:04.201487884Z" level=info msg="StartContainer for \"8a8b463da962d2fe8ce361679f04bfb51fc7bda1ea9bd884502d0baeadbef622\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220531175602-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220531175602-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=newest-cni-20220531175602-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_59_58_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:59:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220531175602-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:01:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:01:02 +0000   Tue, 31 May 2022 17:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:01:02 +0000   Tue, 31 May 2022 17:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:01:02 +0000   Tue, 31 May 2022 17:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:01:02 +0000   Tue, 31 May 2022 17:59:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220531175602-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                a15f53f9-4c24-45d7-81a0-f7f59ad7b293
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-20220531175602-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         74s
	  kube-system                 kindnet-dfhrt                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      61s
	  kube-system                 kube-apiserver-newest-cni-20220531175602-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-newest-cni-20220531175602-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-44xvh                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-newest-cni-20220531175602-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 60s                kube-proxy  
	  Normal  Starting                 7s                 kube-proxy  
	  Normal  Starting                 74s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s                kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s                kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 14s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x7 over 13s)  kubelet     Node newest-cni-20220531175602-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet     Updated Node Allocatable limit across pods
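	
	The Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint recorded above are what keep coredns, metrics-server and storage-provisioner Pending in the earlier pod listings. A quick way to watch for the taint clearing once the CNI initializes (sketch only, not part of the test run):

    # Prints the node's current taints; empty output means the
    # not-ready taint was lifted and Pending pods can schedule.
    kubectl get node newest-cni-20220531175602-6903 -o jsonpath='{.spec.taints}'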
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [096e907429e1c8fe3793e3ef5f1669af5784010ee0fd39e557b3cf212ed29a14] <==
	* {"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:00:58.931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:00:58.933Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:00:58.934Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:00:58.934Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:00:58.934Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:00:58.934Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220531175602-6903 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:01:00.524Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:01:00.525Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:01:00.525Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:01:00.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:01:00.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> etcd [6720e78c021575b83764fb4c30e2f1a621a9aa7b69f7f102c4b92a9693de111b] <==
	* {"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T17:59:51.403Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220531175602-6903 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T17:59:52.027Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:59:52.028Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T17:59:52.029Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T17:59:52.029Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:59:52.029Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T17:59:52.030Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  18:01:11 up  1:43,  0 users,  load average: 1.09, 1.03, 1.52
	Linux newest-cni-20220531175602-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2dfc683928eedfe4ae0a4edc7fe2d9dee7fe5bf3a1ae810be68844bfe3083914] <==
	* I0531 17:59:54.338548       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 17:59:54.338564       1 cache.go:39] Caches are synced for autoregister controller
	I0531 17:59:54.338682       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 17:59:54.402004       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 17:59:54.402332       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 17:59:55.237103       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 17:59:55.237130       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 17:59:55.242098       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0531 17:59:55.245112       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0531 17:59:55.245131       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0531 17:59:55.615274       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 17:59:55.645134       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 17:59:55.736778       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 17:59:55.741300       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 17:59:55.742203       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 17:59:55.745338       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 17:59:56.369085       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 17:59:57.362792       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 17:59:57.371299       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 17:59:57.382886       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 17:59:57.533219       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:00:09.372858       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:00:10.222431       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:00:11.125409       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:00:11.523393       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.96.240.49]
	
	* 
	* ==> kube-apiserver [c585073fed2ff09bf2a22fb5caae163971fa5d47d1c2a4dd91a16f7fe77baa1e] <==
	* E0531 18:01:02.217704       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0531 18:01:02.301399       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0531 18:01:02.301401       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 18:01:02.301434       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0531 18:01:02.303637       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 18:01:02.303682       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 18:01:02.303641       1 cache.go:39] Caches are synced for autoregister controller
	I0531 18:01:02.304710       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0531 18:01:02.320890       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:01:03.088410       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 18:01:03.088435       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 18:01:03.093873       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0531 18:01:03.329756       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:01:03.329822       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:01:03.329831       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:01:04.327260       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:01:04.382910       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:01:04.464666       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:01:04.473472       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:01:04.507628       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:01:04.512703       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 18:01:05.383987       1 controller.go:611] quota admission added evaluator for: namespaces
	I0531 18:01:05.453414       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.102.255.125]
	I0531 18:01:05.463854       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.108.171.112]
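	
	The OpenAPI aggregation failure for v1beta1.metrics.k8s.io logged above (503, service unavailable) is expected while the metrics-server pod is still Pending behind the not-ready taint. One way to watch the aggregated API become available (a sketch, assuming the standard APIService name registered by the addon):

    # Available flips to True once metrics-server answers through
    # the aggregation layer.
    kubectl get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'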
	
	* 
	* ==> kube-controller-manager [8eda42b95092e17acd3bd85e2653ecd57f5994f68104164dd8f4512787e71620] <==
	* I0531 18:00:09.417121       1 shared_informer.go:247] Caches are synced for GC 
	I0531 18:00:09.435053       1 shared_informer.go:247] Caches are synced for node 
	I0531 18:00:09.435076       1 range_allocator.go:173] Starting range CIDR allocator
	I0531 18:00:09.435081       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0531 18:00:09.435087       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0531 18:00:09.438911       1 range_allocator.go:374] Set node newest-cni-20220531175602-6903 PodCIDR to [192.168.0.0/24]
	I0531 18:00:09.461195       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0531 18:00:09.473856       1 shared_informer.go:247] Caches are synced for expand 
	I0531 18:00:09.477055       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:00:09.479165       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0531 18:00:09.504634       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 18:00:09.516907       1 shared_informer.go:247] Caches are synced for stateful set 
	I0531 18:00:09.518174       1 shared_informer.go:247] Caches are synced for attach detach 
	I0531 18:00:09.526157       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0531 18:00:09.747467       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 18:00:09.933062       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:00:09.981790       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 18:00:09.981829       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 18:00:10.175792       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-xx57r"
	I0531 18:00:10.182390       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-lv5sq"
	I0531 18:00:10.198396       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-xx57r"
	I0531 18:00:10.227411       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dfhrt"
	I0531 18:00:10.228692       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-44xvh"
	I0531 18:00:11.435224       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0531 18:00:11.440137       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-64wmz"
	
	* 
	* ==> kube-controller-manager [a75ee54116b3818460d131664635ff3fd9d57a25adc130516692fa394709cdf0] <==
	* I0531 18:01:05.955979       1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0531 18:01:05.956205       1 controllermanager.go:605] Started "csrsigning"
	I0531 18:01:05.956301       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	I0531 18:01:05.956319       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0531 18:01:05.956373       1 dynamic_serving_content.go:131] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0531 18:01:05.957782       1 controllermanager.go:605] Started "ttl"
	I0531 18:01:05.957914       1 ttl_controller.go:121] Starting TTL controller
	I0531 18:01:05.957930       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I0531 18:01:05.959446       1 node_lifecycle_controller.go:377] Sending events to api server.
	I0531 18:01:05.959621       1 taint_manager.go:163] "Sending events to api server"
	I0531 18:01:05.959706       1 node_lifecycle_controller.go:505] Controller will reconcile labels.
	I0531 18:01:05.959739       1 controllermanager.go:605] Started "nodelifecycle"
	I0531 18:01:05.959837       1 node_lifecycle_controller.go:539] Starting node controller
	I0531 18:01:05.959854       1 shared_informer.go:240] Waiting for caches to sync for taint
	E0531 18:01:05.976183       1 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0531 18:01:05.976295       1 controllermanager.go:605] Started "namespace"
	I0531 18:01:05.976375       1 namespace_controller.go:200] Starting namespace controller
	I0531 18:01:05.976392       1 shared_informer.go:240] Waiting for caches to sync for namespace
	I0531 18:01:05.980995       1 garbagecollector.go:146] Starting garbage collector controller
	I0531 18:01:05.981015       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0531 18:01:05.981032       1 graph_builder.go:289] GraphBuilder running
	I0531 18:01:05.981142       1 controllermanager.go:605] Started "garbagecollector"
	I0531 18:01:05.982753       1 node_ipam_controller.go:91] Sending events to api server.
	W0531 18:01:06.010116       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0531 18:01:06.019170       1 shared_informer.go:247] Caches are synced for tokens 
	
	* 
	* ==> kube-proxy [8a8b463da962d2fe8ce361679f04bfb51fc7bda1ea9bd884502d0baeadbef622] <==
	* I0531 18:01:04.239527       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:01:04.239576       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:01:04.239612       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:01:04.323325       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:01:04.323362       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:01:04.323373       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:01:04.323391       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:01:04.324088       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:01:04.324661       1 config.go:317] "Starting service config controller"
	I0531 18:01:04.324697       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:01:04.324816       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:01:04.324871       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:01:04.425669       1 shared_informer.go:247] Caches are synced for service config 
	I0531 18:01:04.425694       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [e8f79a0b14e7be3fd9e9804546fc6c4840685f02361357b409458551853772e0] <==
	* I0531 18:00:11.057248       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0531 18:00:11.057321       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0531 18:00:11.057358       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:00:11.122456       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:00:11.122493       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:00:11.122501       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:00:11.122515       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:00:11.122916       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:00:11.123553       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:00:11.123571       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:00:11.123610       1 config.go:317] "Starting service config controller"
	I0531 18:00:11.123626       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:00:11.224155       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:00:11.224156       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [c7007061d9990705414aa788d4bd89fc3051b5962e1daba8b7ce2346104acf7d] <==
	* W0531 18:00:59.005610       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0531 18:00:59.541543       1 serving.go:348] Generated self-signed cert in-memory
	W0531 18:01:02.130387       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 18:01:02.130633       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:01:02.130768       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 18:01:02.130859       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 18:01:02.210765       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0531 18:01:02.212554       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0531 18:01:02.212672       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 18:01:02.212681       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 18:01:02.212702       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0531 18:01:02.218510       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0531 18:01:02.218572       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0531 18:01:02.218682       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0531 18:01:02.218698       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0531 18:01:02.218995       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0531 18:01:02.219033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0531 18:01:02.221244       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E0531 18:01:02.221290       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	I0531 18:01:03.313191       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [f3a9c3a521d42c640e492b1c7a8417259eff9af450f759fe2bd1f37131b626a9] <==
	* W0531 17:59:54.325644       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:59:54.325915       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:59:54.325927       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 17:59:54.325988       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:54.326090       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:59:54.326341       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:54.327312       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:54.327329       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:59:54.327350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 17:59:54.327353       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:59:54.327365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 17:59:54.327366       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:54.327375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:59:54.326452       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:54.327383       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:59:54.327393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:54.327594       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:59:54.327849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:59:55.221179       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:55.221220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:55.247249       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:59:55.247278       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:59:55.426699       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:59:55.426740       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0531 17:59:55.921497       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:00:32 UTC, end at Tue 2022-05-31 18:01:11 UTC. --
	May 31 18:01:01 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:01.716229     911 kubelet.go:2461] "Error getting node" err="node \"newest-cni-20220531175602-6903\" not found"
	May 31 18:01:01 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:01.816780     911 kubelet.go:2461] "Error getting node" err="node \"newest-cni-20220531175602-6903\" not found"
	May 31 18:01:01 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:01.917202     911 kubelet.go:2461] "Error getting node" err="node \"newest-cni-20220531175602-6903\" not found"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:02.017981     911 kubelet.go:2461] "Error getting node" err="node \"newest-cni-20220531175602-6903\" not found"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.118827     911 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.119678     911 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:02.120073     911 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.315441     911 kubelet_node_status.go:108] "Node was previously registered" node="newest-cni-20220531175602-6903"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.315571     911 kubelet_node_status.go:73] "Successfully registered node" node="newest-cni-20220531175602-6903"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.948092     911 apiserver.go:52] "Watching apiserver"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.951620     911 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:01:02 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:02.951888     911 topology_manager.go:200] "Topology Admit Handler"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:03.037456     911 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.126802     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa6aeb24-8d4b-4960-99a0-65d0493743bf-kube-proxy\") pod \"kube-proxy-44xvh\" (UID: \"aa6aeb24-8d4b-4960-99a0-65d0493743bf\") " pod="kube-system/kube-proxy-44xvh"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.126852     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa6aeb24-8d4b-4960-99a0-65d0493743bf-lib-modules\") pod \"kube-proxy-44xvh\" (UID: \"aa6aeb24-8d4b-4960-99a0-65d0493743bf\") " pod="kube-system/kube-proxy-44xvh"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.126978     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07-cni-cfg\") pod \"kindnet-dfhrt\" (UID: \"d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07\") " pod="kube-system/kindnet-dfhrt"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127018     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9gl\" (UniqueName: \"kubernetes.io/projected/d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07-kube-api-access-mw9gl\") pod \"kindnet-dfhrt\" (UID: \"d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07\") " pod="kube-system/kindnet-dfhrt"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127051     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07-lib-modules\") pod \"kindnet-dfhrt\" (UID: \"d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07\") " pod="kube-system/kindnet-dfhrt"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127081     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa6aeb24-8d4b-4960-99a0-65d0493743bf-xtables-lock\") pod \"kube-proxy-44xvh\" (UID: \"aa6aeb24-8d4b-4960-99a0-65d0493743bf\") " pod="kube-system/kube-proxy-44xvh"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127110     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07-xtables-lock\") pod \"kindnet-dfhrt\" (UID: \"d378aaec-f1d1-4d9f-a0c5-bd94abfb7c07\") " pod="kube-system/kindnet-dfhrt"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127164     911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snz6z\" (UniqueName: \"kubernetes.io/projected/aa6aeb24-8d4b-4960-99a0-65d0493743bf-kube-api-access-snz6z\") pod \"kube-proxy-44xvh\" (UID: \"aa6aeb24-8d4b-4960-99a0-65d0493743bf\") " pod="kube-system/kube-proxy-44xvh"
	May 31 18:01:03 newest-cni-20220531175602-6903 kubelet[911]: I0531 18:01:03.127202     911 reconciler.go:157] "Reconciler: start to sync state"
	May 31 18:01:08 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:08.037902     911 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:01:08 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:08.048119     911 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 31 18:01:08 newest-cni-20220531175602-6903 kubelet[911]: E0531 18:01:08.048163     911 helpers.go:673] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal=allocatableMemory.available
	

                                                
                                                
-- /stdout --
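Note: in the kube-scheduler log above (container c7007061d999…), the startup warnings are an RBAC bootstrap race: the default clusterroles (system:kube-scheduler, system:volume-scheduler, system:basic-user, …) did not exist yet when the scheduler's informers first listed resources, and the extension-apiserver-authentication ConfigMap lookup failed for the same reason. The log itself prints the usual remediation; spelled out as a command (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are placeholders exactly as the log prints them, not values from this run):

	kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA

In this run no manual step appears to have been needed: the scheduler's client-ca cache synced at 18:01:03 once RBAC bootstrap completed, so these warnings look like transient noise rather than the cause of the Pause failure.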
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220531175602-6903 -n newest-cni-20220531175602-6903
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220531175602-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-lv5sq metrics-server-b955d9d8-64wmz storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220531175602-6903 describe pod coredns-64897985d-lv5sq metrics-server-b955d9d8-64wmz storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220531175602-6903 describe pod coredns-64897985d-lv5sq metrics-server-b955d9d8-64wmz storage-provisioner: exit status 1 (53.919265ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-lv5sq" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-64wmz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220531175602-6903 describe pod coredns-64897985d-lv5sq metrics-server-b955d9d8-64wmz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.10s)
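Note: the exit status 1 from the describe step above is a post-mortem artifact rather than an additional cluster fault: the three pods captured in the non-running snapshot (helpers_test.go:270) no longer existed by the time the follow-up describe ran, hence the NotFound errors. A sketch of a race-free variant that captures the full pod objects in the same API call instead of describing them by name afterwards (hypothetical; not what helpers_test.go does):

	kubectl --context newest-cni-20220531175602-6903 get po -A --field-selector=status.phase!=Running -o yaml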

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (542.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220531175323-6903 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0531 18:06:38.906473    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 18:07:15.749878    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
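Note: the two cert_rotation errors above appear to come from the shared test binary's client-go certificate-rotation watcher (pid 6903), which still references kubeconfig entries for profiles deleted earlier in the run (cilium-20220531174030-6903 and old-k8s-version-20220531174534-6903); they are unrelated to the no-preload cluster under test. A quick way to list which users a kubeconfig still carries (a sketch, assuming the KUBECONFIG shown below in stdout):

	kubectl config view -o jsonpath='{.users[*].name}'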

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-20220531175323-6903 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (9m0.286846667s)

                                                
                                                
-- stdout --
	* [no-preload-20220531175323-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20220531175323-6903 in cluster no-preload-20220531175323-6903
	* Pulling base image ...
	* Restarting existing docker container for "no-preload-20220531175323-6903" ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:06:31.856563  261225 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:06:31.856712  261225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:06:31.856722  261225 out.go:309] Setting ErrFile to fd 2...
	I0531 18:06:31.856727  261225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:06:31.856832  261225 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:06:31.857034  261225 out.go:303] Setting JSON to false
	I0531 18:06:31.858042  261225 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6543,"bootTime":1654013849,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:06:31.858099  261225 start.go:125] virtualization: kvm guest
	I0531 18:06:31.860371  261225 out.go:177] * [no-preload-20220531175323-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:06:31.861722  261225 notify.go:193] Checking for updates...
	I0531 18:06:31.861741  261225 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:06:31.863130  261225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:06:31.864624  261225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:06:31.865934  261225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:06:31.867316  261225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:06:31.868940  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:06:31.869397  261225 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:06:31.907400  261225 docker.go:137] docker version: linux-20.10.16
	I0531 18:06:31.907473  261225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:06:32.005401  261225 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:06:31.935579157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:06:32.005490  261225 docker.go:254] overlay module found
	I0531 18:06:32.008184  261225 out.go:177] * Using the docker driver based on existing profile
	I0531 18:06:32.009506  261225 start.go:284] selected driver: docker
	I0531 18:06:32.009519  261225 start.go:806] validating driver "docker" against &{Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:32.009608  261225 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:06:32.010442  261225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:06:32.108530  261225 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:06:32.039046549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:06:32.108794  261225 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:06:32.108817  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:06:32.108827  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:06:32.108849  261225 start_flags.go:306] config:
	{Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:32.111037  261225 out.go:177] * Starting control plane node no-preload-20220531175323-6903 in cluster no-preload-20220531175323-6903
	I0531 18:06:32.112380  261225 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:06:32.113769  261225 out.go:177] * Pulling base image ...
	I0531 18:06:32.115200  261225 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:06:32.115228  261225 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:06:32.115343  261225 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json ...
	I0531 18:06:32.115478  261225 cache.go:107] acquiring lock: {Name:mke7c3123bbb887802876b6038e785eff1d65578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115516  261225 cache.go:107] acquiring lock: {Name:mkccfd735c16da1ed9ea4fc459feb477365b33a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115520  261225 cache.go:107] acquiring lock: {Name:mk598b9f501113e758a5b1053c8a9a41e87e7c42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115517  261225 cache.go:107] acquiring lock: {Name:mk92196aa514c10ef84dd2326a35399f7c3719a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115545  261225 cache.go:107] acquiring lock: {Name:mk59854aac2611f794ffa59524077b81afbc7de4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115552  261225 cache.go:107] acquiring lock: {Name:mk37d69d4525de4b98ff3597b4269e1680132b96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115480  261225 cache.go:107] acquiring lock: {Name:mka8d6fd8013f251c85f4bca8a18522e173be81e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115558  261225 cache.go:107] acquiring lock: {Name:mk4a95c9ed8757a79d1e9fa1e44efcaead7631e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.115785  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 18:06:32.115815  261225 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 348.663µs
	I0531 18:06:32.115829  261225 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 18:06:32.115875  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 exists
	I0531 18:06:32.115877  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 exists
	I0531 18:06:32.115899  261225 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6" took 392.805µs
	I0531 18:06:32.115911  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0531 18:06:32.115912  261225 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 succeeded
	I0531 18:06:32.115913  261225 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6" took 404.132µs
	I0531 18:06:32.115930  261225 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 399.123µs
	I0531 18:06:32.115947  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 exists
	I0531 18:06:32.115972  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 exists
	I0531 18:06:32.115973  261225 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6" took 444.025µs
	I0531 18:06:32.115992  261225 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6" took 526.55µs
	I0531 18:06:32.115932  261225 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 succeeded
	I0531 18:06:32.115998  261225 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 succeeded
	I0531 18:06:32.116024  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0531 18:06:32.115948  261225 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0531 18:06:32.115887  261225 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0531 18:06:32.116038  261225 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 484.283µs
	I0531 18:06:32.116056  261225 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0531 18:06:32.116054  261225 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 533.964µs
	I0531 18:06:32.116074  261225 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0531 18:06:32.116007  261225 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 succeeded
	I0531 18:06:32.116089  261225 cache.go:87] Successfully saved all images to host disk.
	I0531 18:06:32.161016  261225 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:06:32.161038  261225 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:06:32.161053  261225 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:06:32.161092  261225 start.go:352] acquiring machines lock for no-preload-20220531175323-6903: {Name:mk8635283b759be2fcd7aacbafc64b0c778ff5b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:06:32.161181  261225 start.go:356] acquired machines lock for "no-preload-20220531175323-6903" in 68.368µs
	I0531 18:06:32.161203  261225 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:06:32.161208  261225 fix.go:55] fixHost starting: 
	I0531 18:06:32.161424  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:06:32.191567  261225 fix.go:103] recreateIfNeeded on no-preload-20220531175323-6903: state=Stopped err=<nil>
	W0531 18:06:32.191592  261225 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:06:32.194700  261225 out.go:177] * Restarting existing docker container for "no-preload-20220531175323-6903" ...
	I0531 18:06:32.196063  261225 cli_runner.go:164] Run: docker start no-preload-20220531175323-6903
	I0531 18:06:32.572533  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:06:32.606201  261225 kic.go:416] container "no-preload-20220531175323-6903" state is running.
	I0531 18:06:32.606544  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:32.637813  261225 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/config.json ...
	I0531 18:06:32.637995  261225 machine.go:88] provisioning docker machine ...
	I0531 18:06:32.638016  261225 ubuntu.go:169] provisioning hostname "no-preload-20220531175323-6903"
	I0531 18:06:32.638050  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:32.668506  261225 main.go:134] libmachine: Using SSH client type: native
	I0531 18:06:32.668682  261225 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0531 18:06:32.668704  261225 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220531175323-6903 && echo "no-preload-20220531175323-6903" | sudo tee /etc/hostname
	I0531 18:06:32.669243  261225 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46970->127.0.0.1:49432: read: connection reset by peer
	I0531 18:06:35.786250  261225 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220531175323-6903
	
	I0531 18:06:35.786326  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:35.821236  261225 main.go:134] libmachine: Using SSH client type: native
	I0531 18:06:35.821365  261225 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0531 18:06:35.821383  261225 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220531175323-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220531175323-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220531175323-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:06:35.934343  261225 main.go:134] libmachine: SSH cmd err, output: <nil>: 
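	Note: the SSH block above keeps the 127.0.1.1 entry in /etc/hosts in sync with the freshly set hostname. The result can be spot-checked from the Jenkins host against the node container (a sketch; the container name is the one from this log):
	
	    docker exec no-preload-20220531175323-6903 grep 127.0.1.1 /etc/hosts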
	I0531 18:06:35.934366  261225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:06:35.934410  261225 ubuntu.go:177] setting up certificates
	I0531 18:06:35.934428  261225 provision.go:83] configureAuth start
	I0531 18:06:35.934476  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:35.965223  261225 provision.go:138] copyHostCerts
	I0531 18:06:35.965272  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:06:35.965282  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:06:35.965344  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:06:35.965427  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:06:35.965439  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:06:35.965462  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:06:35.965511  261225 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:06:35.965519  261225 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:06:35.965539  261225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:06:35.965578  261225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220531175323-6903 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220531175323-6903]
	I0531 18:06:36.057355  261225 provision.go:172] copyRemoteCerts
	I0531 18:06:36.057402  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:06:36.057430  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.089999  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.169898  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0531 18:06:36.186339  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:06:36.202145  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:06:36.217945  261225 provision.go:86] duration metric: configureAuth took 283.507566ms
	I0531 18:06:36.217967  261225 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:06:36.218141  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:06:36.218159  261225 machine.go:91] provisioned docker machine in 3.58014978s
	I0531 18:06:36.218168  261225 start.go:306] post-start starting for "no-preload-20220531175323-6903" (driver="docker")
	I0531 18:06:36.218179  261225 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:06:36.218216  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:06:36.218249  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.250462  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.329903  261225 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:06:36.332443  261225 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:06:36.332472  261225 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:06:36.332481  261225 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:06:36.332487  261225 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:06:36.332499  261225 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:06:36.332539  261225 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:06:36.332602  261225 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:06:36.332675  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:06:36.338862  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:06:36.355254  261225 start.go:309] post-start completed in 137.071829ms
	I0531 18:06:36.355304  261225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:06:36.355336  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.386735  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.467076  261225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:06:36.470822  261225 fix.go:57] fixHost completed within 4.309609112s
	I0531 18:06:36.470844  261225 start.go:81] releasing machines lock for "no-preload-20220531175323-6903", held for 4.309648254s
	I0531 18:06:36.470905  261225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220531175323-6903
	I0531 18:06:36.502427  261225 ssh_runner.go:195] Run: systemctl --version
	I0531 18:06:36.502473  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.502475  261225 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:06:36.502528  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:06:36.537057  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.539320  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:06:36.638776  261225 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:06:36.649832  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:06:36.658496  261225 docker.go:187] disabling docker service ...
	I0531 18:06:36.658539  261225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:06:36.667272  261225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:06:36.675216  261225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:06:36.752203  261225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:06:36.818959  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:06:36.827401  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:06:36.839221  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
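	Note: the two commands above configure the CRI side of the node: the first pins crictl to the containerd socket via /etc/crictl.yaml, and the second writes the containerd config itself, shipped base64-encoded to survive shell quoting. The payload is a plain config.toml beginning with version = 2 and root = "/var/lib/containerd"; it can be inspected locally, and the endpoint exercised directly, with (a sketch, substituting the payload from the line above):
	
	    echo '<base64 payload>' | base64 -d | head -n 5
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info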
	I0531 18:06:36.851589  261225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:06:36.857335  261225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:06:36.865201  261225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:06:36.934383  261225 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:06:37.001672  261225 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:06:37.001743  261225 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:06:37.005089  261225 start.go:468] Will wait 60s for crictl version
	I0531 18:06:37.005161  261225 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:06:37.030007  261225 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:06:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
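	Note: "server is not initialized yet" is expected for a short window after systemctl restart containerd, which is why a retry is scheduled here rather than failing outright. The equivalent manual wait loop (a sketch):
	
	    until sudo crictl version >/dev/null 2>&1; do sleep 1; done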
	I0531 18:06:48.077720  261225 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:06:48.100248  261225 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:06:48.100298  261225 ssh_runner.go:195] Run: containerd --version
	I0531 18:06:48.127707  261225 ssh_runner.go:195] Run: containerd --version
	I0531 18:06:48.157240  261225 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:06:48.158764  261225 cli_runner.go:164] Run: docker network inspect no-preload-20220531175323-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:06:48.189984  261225 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0531 18:06:48.193238  261225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:06:48.203917  261225 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:06:48.205236  261225 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:06:48.205283  261225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:06:48.227240  261225 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:06:48.227263  261225 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:06:48.227305  261225 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:06:48.249494  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:06:48.249514  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:06:48.249533  261225 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:06:48.249549  261225 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220531175323-6903 NodeName:no-preload-20220531175323-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:06:48.249720  261225 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220531175323-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
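	Note: the generated kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new and diffed against the live copy further down. To see which of these fields deviate from upstream defaults, the defaults can be printed with the same kubeadm binary (a sketch):
	
	    /var/lib/minikube/binaries/v1.23.6/kubeadm config print init-defaults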
	
	I0531 18:06:48.249812  261225 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220531175323-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
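	Note: the [Service] drop-in above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below); the empty ExecStart= line clears the stock unit's command before the minikube one is set, which is the standard systemd override pattern. The effective merged unit can be reviewed on the node with:
	
	    systemctl cat kubelet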
	I0531 18:06:48.249865  261225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:06:48.256345  261225 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:06:48.256398  261225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:06:48.262969  261225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (575 bytes)
	I0531 18:06:48.274664  261225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:06:48.287040  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2059 bytes)
	I0531 18:06:48.299091  261225 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:06:48.301889  261225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:06:48.310656  261225 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903 for IP: 192.168.67.2
	I0531 18:06:48.310742  261225 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:06:48.310777  261225 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:06:48.310834  261225 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/client.key
	I0531 18:06:48.310884  261225 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key.c7fa3a9e
	I0531 18:06:48.310918  261225 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key
	I0531 18:06:48.310996  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:06:48.311025  261225 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:06:48.311034  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:06:48.311059  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:06:48.311084  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:06:48.311106  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:06:48.311181  261225 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:06:48.311875  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:06:48.328351  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:06:48.344708  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:06:48.361384  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/no-preload-20220531175323-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:06:48.377622  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:06:48.393772  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:06:48.409607  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:06:48.425962  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:06:48.441752  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:06:48.457422  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:06:48.473322  261225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:06:48.489365  261225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:06:48.501512  261225 ssh_runner.go:195] Run: openssl version
	I0531 18:06:48.505937  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:06:48.512677  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.515513  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.515567  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:06:48.520028  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:06:48.526318  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:06:48.533197  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.536004  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.536048  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:06:48.540484  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:06:48.546655  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:06:48.553433  261225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.556334  261225 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.556368  261225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:06:48.560699  261225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:06:48.566833  261225 kubeadm.go:395] StartCluster: {Name:no-preload-20220531175323-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:no-preload-20220531175323-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:06:48.566936  261225 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:06:48.566963  261225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:06:48.590607  261225 cri.go:87] found id: "ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	I0531 18:06:48.590629  261225 cri.go:87] found id: "b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e"
	I0531 18:06:48.590640  261225 cri.go:87] found id: "91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b"
	I0531 18:06:48.590651  261225 cri.go:87] found id: "2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d"
	I0531 18:06:48.590665  261225 cri.go:87] found id: "0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509"
	I0531 18:06:48.590677  261225 cri.go:87] found id: "c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66"
	I0531 18:06:48.590684  261225 cri.go:87] found id: ""
	I0531 18:06:48.590707  261225 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:06:48.601948  261225 cri.go:114] JSON = null
	W0531 18:06:48.601985  261225 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
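	Note: the warning above is a cross-check, not a fatal error: crictl reported 6 kube-system containers while runc's list came back empty (JSON = null), so minikube skips the unpause pass and proceeds. The two sides of the check, verbatim from this log:
	
	    sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	    sudo runc --root /run/containerd/runc/k8s.io list -f json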
	I0531 18:06:48.602021  261225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:06:48.608119  261225 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:06:48.608137  261225 kubeadm.go:626] restartCluster start
	I0531 18:06:48.608162  261225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:06:48.613826  261225 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:48.614554  261225 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220531175323-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:06:48.615039  261225 kubeconfig.go:127] "no-preload-20220531175323-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:06:48.615784  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:06:48.617278  261225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:06:48.623232  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:48.623290  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:48.630395  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:48.830763  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:48.830820  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:48.838930  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.031184  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.031241  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.039494  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.230727  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.230797  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.239312  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.430567  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.430662  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.438967  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.631308  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.631386  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.640008  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:49.831278  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:49.831352  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:49.839490  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.030797  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.030869  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.039659  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.230992  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.231065  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.239370  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.430595  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.430703  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.438937  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.631190  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.631256  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.639827  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:50.831099  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:50.831190  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:50.839564  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.030836  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.030912  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.039250  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.230475  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.230546  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.238738  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.431028  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.431083  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.439535  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.630862  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.630914  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.639047  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.639064  261225 api_server.go:165] Checking apiserver status ...
	I0531 18:06:51.639103  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:06:51.646503  261225 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.646523  261225 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
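	Note: each "Checking apiserver status" attempt above is the same pgrep probe repeated on a roughly 200ms cadence; once the deadline passes with no kube-apiserver process found, minikube concludes a reconfigure is needed. Condensed into a single wait loop (a sketch):
	
	    timeout 3 sh -c 'until sudo pgrep -xnf "kube-apiserver.*minikube.*"; do sleep 0.2; done'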
	I0531 18:06:51.646531  261225 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:06:51.646545  261225 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:06:51.646589  261225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:06:51.669569  261225 cri.go:87] found id: "ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516"
	I0531 18:06:51.669588  261225 cri.go:87] found id: "b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e"
	I0531 18:06:51.669595  261225 cri.go:87] found id: "91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b"
	I0531 18:06:51.669601  261225 cri.go:87] found id: "2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d"
	I0531 18:06:51.669608  261225 cri.go:87] found id: "0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509"
	I0531 18:06:51.669617  261225 cri.go:87] found id: "c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66"
	I0531 18:06:51.669633  261225 cri.go:87] found id: ""
	I0531 18:06:51.669640  261225 cri.go:232] Stopping containers: [ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516 b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e 91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b 2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d 0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509 c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66]
	I0531 18:06:51.669675  261225 ssh_runner.go:195] Run: which crictl
	I0531 18:06:51.672277  261225 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop ef96bc146e16f53bd22db7d399b0ad4a6fd599a8158014528cebb9ce69a69516 b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e 91afec248cd268a7c62bf50f2e20abcb5a4f7d7673f6535ff34c54a1e9de197b 2d2cb82735b88ed0e2a2b55750ef118f20a27151649abeb88b3834172d40838d 0d1755990bfb1499ea63384e8882cd8a1300d9c62a324a9a132dc9c9c48fa509 c25ff47b27774be463d42d919a35176e1f2eeed5a385db33fef07839ffc60f66
	I0531 18:06:51.696665  261225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:06:51.706131  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:06:51.712590  261225 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 17:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 31 17:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 17:53 /etc/kubernetes/scheduler.conf
	
	I0531 18:06:51.712632  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:06:51.718730  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:06:51.724887  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:06:51.731013  261225 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.731060  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:06:51.737056  261225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:06:51.743102  261225 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:06:51.743164  261225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:06:51.748937  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:06:51.755338  261225 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:06:51.755353  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:51.795954  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.528000  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.654713  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.709489  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:06:52.750049  261225 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:06:52.750109  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:53.257876  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:53.757835  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:54.257770  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:54.757793  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:55.258138  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:55.757795  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:56.258203  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:56.758036  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:57.257882  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:57.757890  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.258306  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.758044  261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:06:58.810957  261225 api_server.go:71] duration metric: took 6.060906737s to wait for apiserver process to appear ...
	I0531 18:06:58.810993  261225 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:06:58.811006  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:06:58.811421  261225 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0531 18:06:59.312100  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:01.519859  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:07:01.519904  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
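	Note: this 403 comes from probing /healthz without credentials. Anonymous access to /healthz is normally granted by the bootstrap RBAC roles, which have not been created yet at this point, so a 403 here only proves the endpoint is reachable and the probe is retried. An authenticated probe would use the profile's kubeconfig instead (a sketch; the kubeconfig path is the one listed at the top of this log):
	
	    kubectl --kubeconfig "$KUBECONFIG" get --raw /healthz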
	I0531 18:07:01.812506  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:01.816767  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:07:01.816787  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:07:02.312284  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:02.316938  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:07:02.316963  261225 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:07:02.812304  261225 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 18:07:02.817359  261225 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0531 18:07:02.822648  261225 api_server.go:140] control plane version: v1.23.6
	I0531 18:07:02.822669  261225 api_server.go:130] duration metric: took 4.011670774s to wait for apiserver health ...
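The healthz exchange above walks through the three states a poller has to ride out before the control plane is usable: connection refused while the apiserver socket is still closed, 403 while anonymous requests are rejected ahead of RBAC bootstrap, and 500 while post-start hooks such as rbac/bootstrap-roles are still failing, until a plain 200 "ok" arrives. A hedged sketch of such a retry loop, with the function name and timeouts assumed; unlike a real client it skips TLS verification, purely to stay self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// A real client would trust the cluster CA certificate instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "ok" seen at the end of the log above
			}
			// 403 (anonymous RBAC) and 500 (post-start hooks pending)
			// both mean "not ready yet": report and retry.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no 200 from %s within %v", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}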
	I0531 18:07:02.822682  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:07:02.822688  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:07:02.825359  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:07:02.826864  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:07:02.830365  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:07:02.830389  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:07:02.844337  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
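The two steps above are the whole CNI installation: the generated manifest is copied to /var/tmp/minikube/cni.yaml on the node, then applied with the version-pinned kubectl against the node-local kubeconfig. The sketch below mirrors both steps as a single local function; the name applyCNI is hypothetical, and a real run would perform them through the SSH runner rather than directly:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyCNI writes the manifest where the log puts it, then shells out to
// the pinned kubectl binary exactly as the Run line above does.
func applyCNI(manifest []byte) error {
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
		return err
	}
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.6/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := applyCNI([]byte("# CNI manifest contents go here\n")); err != nil {
		fmt.Println(err)
	}
}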
	I0531 18:07:03.565042  261225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:07:03.571091  261225 system_pods.go:59] 9 kube-system pods found
	I0531 18:07:03.571119  261225 system_pods.go:61] "coredns-64897985d-8cptk" [b7548080-9210-497c-9a72-e3d0dc790731] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571127  261225 system_pods.go:61] "etcd-no-preload-20220531175323-6903" [0c3833e1-4748-46be-b9f9-ba9743784100] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:07:03.571136  261225 system_pods.go:61] "kindnet-n856k" [1bf232e0-3302-4413-8693-378d7bcc2bad] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:07:03.571183  261225 system_pods.go:61] "kube-apiserver-no-preload-20220531175323-6903" [a04b08e1-09a2-4700-97ef-1d46decd0195] Running
	I0531 18:07:03.571194  261225 system_pods.go:61] "kube-controller-manager-no-preload-20220531175323-6903" [fc4e03c4-6dfa-492c-b27f-80c7dde0de7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:07:03.571207  261225 system_pods.go:61] "kube-proxy-8szbz" [e7e66d9f-358e-4d5f-b12d-541da7f43741] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:07:03.571216  261225 system_pods.go:61] "kube-scheduler-no-preload-20220531175323-6903" [5399c2c9-e9f9-4208-9bd3-f922cc3f4f6b] Running
	I0531 18:07:03.571224  261225 system_pods.go:61] "metrics-server-b955d9d8-bsgtk" [5c43931e-ba07-4e57-b438-73e230ac2391] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571230  261225 system_pods.go:61] "storage-provisioner" [a98841d0-cbd8-464c-b5bc-542abbaf8a0b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:07:03.571237  261225 system_pods.go:74] duration metric: took 6.174332ms to wait for pod list to return data ...
	I0531 18:07:03.571248  261225 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:07:03.573670  261225 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:07:03.573690  261225 node_conditions.go:123] node cpu capacity is 8
	I0531 18:07:03.573700  261225 node_conditions.go:105] duration metric: took 2.442916ms to run NodePressure ...
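Before the restart phase continues, the log above enumerates the nine kube-system pods and sanity-checks node capacity (ephemeral storage, CPU) for the NodePressure condition. The same pod listing can be reproduced with client-go, sketched below against the kubeconfig path from the log; error handling is reduced to panics to keep it short:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// The capacity figures in the log (CPU, ephemeral storage) come from
	// Node.Status.Capacity on the corresponding Node object.
	for _, p := range pods.Items {
		fmt.Printf("%-55s %s\n", p.Name, p.Status.Phase)
	}
}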
	I0531 18:07:03.573714  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:07:03.691657  261225 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:07:03.695473  261225 kubeadm.go:777] kubelet initialised
	I0531 18:07:03.695496  261225 kubeadm.go:778] duration metric: took 3.812908ms waiting for restarted kubelet to initialise ...
	I0531 18:07:03.695502  261225 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:07:03.699872  261225 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
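From here the log settles into a poll loop: every ~2.5s it re-reads the coredns pod and reports that it still has no Ready condition, because the scheduler cannot place it on a node that still carries the node.kubernetes.io/not-ready taint (lifted only once the CNI, kindnet in this run, initializes). The check itself reduces to inspecting the PodReady condition, as in the sketch below; the direct Get and the polling interval are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod carries a PodReady condition with
// status True, which is what the pod_ready loop below is waiting for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-64897985d-8cptk", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("coredns is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // roughly the cadence in the log
	}
}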
	I0531 18:07:05.705225  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:08.204845  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:10.205511  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:12.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:15.204717  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:17.204780  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:19.205209  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:21.205381  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:23.704908  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:26.205961  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:28.705082  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:31.205047  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:33.205742  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:35.705103  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:38.205545  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:40.206261  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:42.704687  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:44.705052  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:47.205179  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:49.205593  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:51.704646  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:53.704749  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:55.705572  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:07:57.706156  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:00.204705  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:02.205219  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:04.704880  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:06.705169  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:08.706025  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:11.205176  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:13.705063  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:16.205156  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:18.704597  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:21.205707  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:23.704978  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:25.705116  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:27.705434  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:30.205110  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:32.704857  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:34.705759  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:37.206138  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:39.207436  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:41.704753  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:44.204782  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:46.204845  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:48.205463  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:50.705334  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:52.705874  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:55.205337  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:57.704872  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:00.205465  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:02.205884  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:04.705122  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.204868  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:09.704483  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:11.704965  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:13.705008  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.205471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.205548  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.207254  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:22.705143  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.205093  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.704608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:29.705088  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.205237  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:34.205608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:36.704946  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:38.705223  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:41.205391  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:43.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:45.705136  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:48.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:50.206057  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:52.705353  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.705507  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:57.205002  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... identical pod_ready.go:102 poll of pod "coredns-64897985d-8cptk" (Pending/Unschedulable, node.kubernetes.io/not-ready taint) repeated every ~2.5s through 18:11:02 ...]
	I0531 18:11:03.702287  261225 pod_ready.go:81] duration metric: took 4m0.002387032s waiting for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	E0531 18:11:03.702314  261225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:11:03.702336  261225 pod_ready.go:38] duration metric: took 4m0.006822044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:11:03.702359  261225 kubeadm.go:630] restartCluster took 4m15.094217425s
	W0531 18:11:03.702487  261225 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:11:03.702514  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:11:05.343353  261225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.640813037s)
	I0531 18:11:05.343437  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:05.352859  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:11:05.359859  261225 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:11:05.359907  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:11:05.366334  261225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:11:05.366377  261225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:11:18.709173  261225 out.go:204]   - Generating certificates and keys ...
	I0531 18:11:18.711907  261225 out.go:204]   - Booting up control plane ...
	I0531 18:11:18.714606  261225 out.go:204]   - Configuring RBAC rules ...
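What happened above: after the 4m pod-readiness timeout, minikube gave up on restarting the existing cluster, ran `kubeadm reset`, found none of the /etc/kubernetes/*.conf files left (the `ls` exited with status 2, so stale-config cleanup was skipped), and re-bootstrapped with `kubeadm init` against the regenerated /var/tmp/minikube/kubeadm.yaml. A minimal shell sketch of that presence check, assuming the same file list as the log:

	# Sketch (assumption: same paths as in the log above). A non-zero exit from ls
	# means no stale kubeconfigs exist, so cleanup is skipped and kubeadm init runs fresh.
	if ! sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	    /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; then
	  echo "config check failed, skipping stale config cleanup"
	fi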
	I0531 18:11:18.716465  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:11:18.716486  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:11:18.717949  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:11:18.719198  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:11:18.722600  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:11:18.722616  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:11:18.735057  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
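For the docker driver with the containerd runtime, minikube auto-selects kindnet (cni.go:162 above) and applies the generated manifest with the cluster's own kubectl binary. A hedged way to confirm the result after such a start (listing DaemonSets rather than naming one, since the DaemonSet name is not shown in this log):

	# Sketch: verify the CNI plugin binaries and the applied CNI workload.
	stat /opt/cni/bin/portmap
	sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get daemonsets -n kube-system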
	I0531 18:11:19.350353  261225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:11:19.350404  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.350427  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531175323-6903 minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.430085  261225 ops.go:34] apiserver oom_adj: -16
	I0531 18:11:19.430086  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... kubectl get sa default retried every ~0.5s through 18:11:31 ...]
	I0531 18:11:31.537008  261225 kubeadm.go:1045] duration metric: took 12.186656817s to wait for elevateKubeSystemPrivileges.
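The condensed run of `kubectl get sa default` calls above is minikube polling, at roughly 0.5s intervals, for the default ServiceAccount that kube-controller-manager creates asynchronously after kubeadm init; only once it exists does elevateKubeSystemPrivileges complete. A rough shell equivalent of that wait, assuming the same interval:

	# Sketch of the elevateKubeSystemPrivileges wait seen above: poll until the
	# default ServiceAccount exists in the default namespace.
	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done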
	I0531 18:11:31.537033  261225 kubeadm.go:397] StartCluster complete in 4m42.970207425s
	I0531 18:11:31.537049  261225 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:31.537139  261225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:11:31.538101  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:32.051475  261225 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531175323-6903" rescaled to 1
	I0531 18:11:32.051538  261225 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:11:32.053328  261225 out.go:177] * Verifying Kubernetes components...
	I0531 18:11:32.051603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:11:32.051631  261225 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:11:32.051839  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:11:32.054610  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:32.054630  261225 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054636  261225 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054661  261225 addons.go:65] Setting dashboard=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054667  261225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531175323-6903"
	I0531 18:11:32.054666  261225 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054677  261225 addons.go:153] Setting addon dashboard=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054685  261225 addons.go:165] addon dashboard should already be in state true
	I0531 18:11:32.054694  261225 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:32.054650  261225 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054708  261225 addons.go:165] addon metrics-server should already be in state true
	W0531 18:11:32.054715  261225 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:11:32.054731  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054749  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054752  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.055002  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055252  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055266  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055256  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.065260  261225 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:11:32.103052  261225 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:11:32.107208  261225 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108480  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:11:32.108501  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:11:32.109942  261225 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108562  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.111742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:11:32.111790  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:11:32.111836  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.114237  261225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:11:32.115989  261225 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.116015  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:11:32.116006  261225 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.116032  261225 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:11:32.116057  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.116079  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.116746  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.154692  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:11:32.172159  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176336  261225 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.176359  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:11:32.176359  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176412  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.178381  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.214197  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.405898  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:11:32.405927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:11:32.418989  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.420809  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.421818  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:11:32.421841  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:11:32.427921  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:11:32.427945  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:11:32.502461  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:11:32.502491  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:11:32.513965  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:11:32.513988  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:11:32.519888  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.519911  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:11:32.534254  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:11:32.534276  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:11:32.613065  261225 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
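The sed pipeline at 18:11:32.154 rewrites the coredns ConfigMap so pods can resolve host.minikube.internal to the host gateway. Reconstructed from that sed expression, the block injected ahead of the `forward . /etc/resolv.conf` line of the Corefile looks like:

	        hosts {
	           192.168.67.1 host.minikube.internal
	           fallthrough
	        }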
	I0531 18:11:32.613879  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.624901  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:11:32.624927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:11:32.718626  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:11:32.718699  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:11:32.811670  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:11:32.811701  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:11:32.908742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:11:32.908771  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:11:33.007964  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.007996  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:11:33.107854  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.513893  261225 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:34.072421  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.312128  261225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.204221815s)
	I0531 18:11:34.314039  261225 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:11:34.315350  261225 addons.go:417] enableAddons completed in 2.263731051s
	I0531 18:11:36.571090  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	[... node_ready.go:58 poll of node "no-preload-20220531175323-6903" with status "Ready":"False" repeated every ~2.5s through 18:15:30 ...]
	I0531 18:15:32.073761  261225 node_ready.go:38] duration metric: took 4m0.008462542s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:15:32.075979  261225 out.go:177] 
	W0531 18:15:32.077511  261225 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:15:32.077526  261225 out.go:239] * 
	W0531 18:15:32.078185  261225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:15:32.080080  261225 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-20220531175323-6903 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
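Exit status 80 is minikube's GUEST_START failure class: the node never left NotReady within the wait budget, matching the earlier coredns polls that showed the node.kubernetes.io/not-ready taint. A triage sketch for this pattern, assuming minikube's default kubeconfig context naming (context = profile name); `minikube logs --file` is the step the failure box itself suggests:

	# Inspect node conditions/taints and collect logs for the failing profile.
	kubectl --context no-preload-20220531175323-6903 describe node no-preload-20220531175323-6903
	out/minikube-linux-amd64 -p no-preload-20220531175323-6903 logs --file=logs.txt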
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220531175323-6903
helpers_test.go:235: (dbg) docker inspect no-preload-20220531175323-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d",
	        "Created": "2022-05-31T17:53:25.199469079Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:06:32.565667778Z",
	            "FinishedAt": "2022-05-31T18:06:31.347829206Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hosts",
	        "LogPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d-json.log",
	        "Name": "/no-preload-20220531175323-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220531175323-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220531175323-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220531175323-6903",
	                "Source": "/var/lib/docker/volumes/no-preload-20220531175323-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220531175323-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "name.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65d77ba8a692af3c9abf23c596fe50443fb99421003d3dd566b15d4ac739a15f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/65d77ba8a692",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220531175323-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a4f33d13fefc",
	                        "no-preload-20220531175323-6903"
	                    ],
	                    "NetworkID": "b2391a84ebd8e16dd2e9aca80777d6d03045cffc9cfc8290f45a61a1473c3244",
	                    "EndpointID": "2d286acc05ba36111035d982d1c124c6d8d7725e9ab99431bd3a13dd88d7ed81",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
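
Two details of the inspect dump above are worth decoding. The empty HostPort strings under HostConfig.PortBindings ask Docker to bind each exposed container port to an ephemeral host port on 127.0.0.1; the resolved values appear under NetworkSettings.Ports (for example 22/tcp -> 127.0.0.1:49432). Likewise, the Memory limit of 2306867200 bytes is exactly 2200 MiB, and MemorySwap (4613734400) is twice that, Docker's default when no explicit swap limit is set. A minimal Go sketch (not minikube's actual helper; the container name is taken from the dump above) that resolves the SSH host port with the same Go template that appears in the cli_runner lines later in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker for the ephemeral host port bound to the
// container's 22/tcp, using the format template seen in the log.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("no-preload-20220531175323-6903")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // prints 49432 for the container state captured above
}
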
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220531175323-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p                                   | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:09 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:09:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
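
For reference, every line below carries the klog header described above. A standalone sketch (not part of the test harness) that splits one such header from this log into its fields:

package main

import (
	"fmt"
	"regexp"
)

// Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6}) +(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := "I0531 18:09:07.772021  269289 out.go:296] Setting OutFile to fd 1 ..."
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		panic("line does not match the klog header format")
	}
	// I = Info severity, month 05, day 31, microsecond timestamp, thread id,
	// then the source file:line that emitted the message.
	fmt.Printf("severity=%s month=%s day=%s time=%s tid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
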
	I0531 18:09:07.772021  269289 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:09:07.772181  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772198  269289 out.go:309] Setting ErrFile to fd 2...
	I0531 18:09:07.772206  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772308  269289 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:09:07.772576  269289 out.go:303] Setting JSON to false
	I0531 18:09:07.774031  269289 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6699,"bootTime":1654013849,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:09:07.774104  269289 start.go:125] virtualization: kvm guest
	I0531 18:09:07.776455  269289 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:09:07.777955  269289 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:09:07.777894  269289 notify.go:193] Checking for updates...
	I0531 18:09:07.779456  269289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:09:07.780983  269289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:07.782421  269289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:09:07.783767  269289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:09:07.785508  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:07.786063  269289 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:09:07.823614  269289 docker.go:137] docker version: linux-20.10.16
	I0531 18:09:07.823700  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:07.921312  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.851605066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:07.921419  269289 docker.go:254] overlay module found
	I0531 18:09:07.923729  269289 out.go:177] * Using the docker driver based on existing profile
	I0531 18:09:07.925091  269289 start.go:284] selected driver: docker
	I0531 18:09:07.925101  269289 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:07.925198  269289 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:09:07.926037  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:08.026136  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.954605342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:08.026404  269289 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:09:08.026427  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:08.026434  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:08.026459  269289 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:08.029735  269289 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 18:09:08.031102  269289 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:09:08.032588  269289 out.go:177] * Pulling base image ...
	I0531 18:09:08.033842  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:08.033877  269289 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:09:08.033887  269289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:09:08.033901  269289 cache.go:57] Caching tarball of preloaded images
	I0531 18:09:08.034419  269289 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:09:08.034462  269289 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:09:08.034678  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.080805  269289 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:09:08.080832  269289 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:09:08.080845  269289 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:09:08.080882  269289 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:09:08.080970  269289 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 68.005µs
	I0531 18:09:08.080987  269289 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:09:08.080995  269289 fix.go:55] fixHost starting: 
	I0531 18:09:08.081277  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.111663  269289 fix.go:103] recreateIfNeeded on embed-certs-20220531175604-6903: state=Stopped err=<nil>
	W0531 18:09:08.111695  269289 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:09:08.114165  269289 out.go:177] * Restarting existing docker container for "embed-certs-20220531175604-6903" ...
	I0531 18:09:03.375777  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:05.375862  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.875726  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.204868  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:09.704483  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:11.704965  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:08.115697  269289 cli_runner.go:164] Run: docker start embed-certs-20220531175604-6903
	I0531 18:09:08.473488  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.506356  269289 kic.go:416] container "embed-certs-20220531175604-6903" state is running.
	I0531 18:09:08.506679  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:08.539365  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.539556  269289 machine.go:88] provisioning docker machine ...
	I0531 18:09:08.539577  269289 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 18:09:08.539613  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:08.571324  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:08.571535  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:08.571555  269289 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 18:09:08.572224  269289 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40978->127.0.0.1:49442: read: connection reset by peer
	I0531 18:09:11.691333  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 18:09:11.691423  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:11.724149  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:11.724284  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:11.724303  269289 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:09:11.834529  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:09:11.834559  269289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:09:11.834578  269289 ubuntu.go:177] setting up certificates
	I0531 18:09:11.834589  269289 provision.go:83] configureAuth start
	I0531 18:09:11.834632  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:11.865871  269289 provision.go:138] copyHostCerts
	I0531 18:09:11.865924  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:09:11.865932  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:09:11.865982  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:09:11.866066  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:09:11.866081  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:09:11.866104  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:09:11.866152  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:09:11.866160  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:09:11.866179  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:09:11.866227  269289 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
	I0531 18:09:12.090438  269289 provision.go:172] copyRemoteCerts
	I0531 18:09:12.090495  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:09:12.090534  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.123286  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.206789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:09:12.223789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:09:12.239992  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:09:12.256268  269289 provision.go:86] duration metric: configureAuth took 421.669237ms
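
The configureAuth step above regenerates the machine's server certificate with the SAN list shown in the provision.go line (192.168.49.2, 127.0.0.1, localhost, minikube, and the profile name). A rough sketch of equivalent certificate generation in Go; unlike minikube's provision code, which signs with the profile CA (ca.pem/ca-key.pem), this self-signs for brevity, and the 26280h lifetime mirrors the CertExpiration value in the config dump above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20220531175604-6903"}},
		// SANs taken from the provision.go log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "embed-certs-20220531175604-6903"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
		KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
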
	I0531 18:09:12.256294  269289 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:09:12.256475  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:12.256491  269289 machine.go:91] provisioned docker machine in 3.716920297s
	I0531 18:09:12.256499  269289 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 18:09:12.256507  269289 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:09:12.256545  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:09:12.256574  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.289898  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.374015  269289 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:09:12.376709  269289 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:09:12.376729  269289 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:09:12.376738  269289 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:09:12.376744  269289 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:09:12.376754  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:09:12.376804  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:09:12.376870  269289 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:09:12.376948  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:09:12.383195  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:12.399428  269289 start.go:309] post-start completed in 142.913406ms
	I0531 18:09:12.399507  269289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:09:12.399549  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.432219  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.515046  269289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:09:12.518751  269289 fix.go:57] fixHost completed within 4.437752156s
	I0531 18:09:12.518774  269289 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 4.437792479s
	I0531 18:09:12.518855  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:12.550553  269289 ssh_runner.go:195] Run: systemctl --version
	I0531 18:09:12.550601  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.550641  269289 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:09:12.550697  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.582421  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.582872  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.659019  269289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:09:12.679913  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:09:12.688643  269289 docker.go:187] disabling docker service ...
	I0531 18:09:12.688702  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:09:12.697529  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:09:12.706100  269289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:09:09.875798  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.375479  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:13.705008  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.205471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
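The repeated "0/1 nodes are available" message above is the scheduler's standard taint/toleration check: the lone node still carries the node.kubernetes.io/not-ready taint while it boots, and the coredns pod's tolerations do not cover it. A toy Go matcher showing the rule (the toleration list here is an illustrative subset, not coredns's actual spec):

package main

import "fmt"

type taint struct{ key, effect string }
type toleration struct{ key, effect string }

// tolerated reports whether any toleration matches the taint's key and
// effect (an empty toleration effect matches every effect).
func tolerated(t taint, tols []toleration) bool {
	for _, tol := range tols {
		if tol.key == t.key && (tol.effect == "" || tol.effect == t.effect) {
			return true
		}
	}
	return false
}

func main() {
	notReady := taint{key: "node.kubernetes.io/not-ready", effect: "NoSchedule"}
	tols := []toleration{{key: "CriticalAddonsOnly"}} // illustrative subset
	fmt.Println("node available:", tolerated(notReady, tols)) // false -> 0/1 nodes
}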
	I0531 18:09:12.785582  269289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:09:12.855815  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:09:12.864020  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:09:12.875934  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
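The long base64 argument above is minikube's generated /etc/containerd/config.toml. Decoding it yields a version = 2 config that, among other things, sets sandbox_image = "k8s.gcr.io/pause:3.6", points the CNI conf_dir at /etc/cni/net.mk (matching the kubelet flag that appears later), and leaves SystemdCgroup = false, consistent with the cgroupfs driver used below. A quick Go sketch for inspecting it (the constant here is a short valid sample; paste the full blob from the log in its place):

package main

import (
	"encoding/base64"
	"fmt"
	"log"
)

func main() {
	// Short valid sample; substitute the full payload to see the whole
	// config.toml that gets written on the node.
	const payload = "dmVyc2lvbiA9IDIK" // decodes to: version = 2
	out, err := base64.StdEncoding.DecodeString(payload)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}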
	I0531 18:09:12.888218  269289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:09:12.894177  269289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:09:12.900048  269289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:09:12.967246  269289 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:09:13.034030  269289 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:09:13.034097  269289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:09:13.037697  269289 start.go:468] Will wait 60s for crictl version
	I0531 18:09:13.037755  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:13.061656  269289 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:09:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:09:14.375758  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.375804  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.205548  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.207254  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.375866  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.875902  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.108851  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:24.132832  269289 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:09:24.132887  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.159676  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.189925  269289 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:09:24.191315  269289 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:09:24.222603  269289 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:09:24.225783  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
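The shell one-liner above is a safe hosts-file update: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the result is staged in a temp file that sudo cp moves into place (the same idiom recurs below for control-plane.minikube.internal). A rough equivalent in plain Go, with illustrative paths and rename in place of the cp step:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // writing this requires root
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	// Drop any existing mapping, then append the current one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	// Stage to a temp file and swap it in, mirroring the /tmp/h.$$ step.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		log.Fatal(err)
	}
}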
	I0531 18:09:24.236610  269289 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:09:22.705143  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.205093  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.237875  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:24.237934  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.260983  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.261003  269289 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:09:24.261042  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.282964  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.282986  269289 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:09:24.283024  269289 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:09:24.304408  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:24.304433  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:24.304447  269289 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:09:24.304466  269289 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:09:24.304647  269289 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
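The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at kubeadm.go:158. A trimmed sketch of that template-driven generation (minikube's real template covers every field shown above; this one keeps only a handful for illustration):

package main

import (
	"log"
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	o := kubeadmOpts{
		AdvertiseAddress:  "192.168.49.2",
		APIServerPort:     8443,
		NodeName:          "embed-certs-20220531175604-6903",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.23.6",
	}
	if err := t.Execute(os.Stdout, o); err != nil {
		log.Fatal(err)
	}
}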
	
	I0531 18:09:24.304770  269289 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
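In the systemd drop-in above, the empty ExecStart= line is deliberate: a drop-in must first clear the packaged unit's ExecStart before defining a new one, which is how minikube swaps in its own kubelet command. Note --cni-conf-dir=/etc/cni/net.mk lining up with both the containerd CNI conf_dir and the kubelet.cni-conf-dir extra option in the config struct. A toy generator for such a drop-in, using just a subset of the flags from the log:

package main

import "fmt"

func main() {
	kubelet := "/var/lib/minikube/binaries/v1.23.6/kubelet"
	flags := [][2]string{ // subset of the flags in the log
		{"container-runtime-endpoint", "unix:///run/containerd/containerd.sock"},
		{"cni-conf-dir", "/etc/cni/net.mk"},
		{"node-ip", "192.168.49.2"},
	}
	cmd := kubelet
	for _, f := range flags {
		cmd += fmt.Sprintf(" --%s=%s", f[0], f[1])
	}
	// The first (empty) ExecStart clears the vendor unit's command.
	fmt.Printf("[Service]\nExecStart=\nExecStart=%s\n", cmd)
}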
	I0531 18:09:24.304828  269289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:09:24.311429  269289 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:09:24.311492  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:09:24.317597  269289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 18:09:24.329144  269289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:09:24.340465  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0531 18:09:24.352005  269289 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:09:24.354594  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.362803  269289 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 18:09:24.362883  269289 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:09:24.362917  269289 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:09:24.362978  269289 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 18:09:24.363028  269289 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 18:09:24.363065  269289 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 18:09:24.363186  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:09:24.363227  269289 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:09:24.363235  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:09:24.363261  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:09:24.363280  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:09:24.363304  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:09:24.363343  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:24.363895  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:09:24.379919  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:09:24.395597  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:09:24.411294  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:09:24.426758  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:09:24.442477  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:09:24.458117  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:09:24.473675  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:09:24.489258  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:09:24.504896  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:09:24.520499  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:09:24.536081  269289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:09:24.547693  269289 ssh_runner.go:195] Run: openssl version
	I0531 18:09:24.552235  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:09:24.559008  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561937  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561976  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.566453  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:09:24.572576  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:09:24.579358  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582018  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582059  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.586386  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:09:24.592600  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:09:24.599203  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.601990  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.602019  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.606281  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
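The b5213941.0, 51391683.0, and 3ec20f2e.0 names above are OpenSSL subject-hash links: TLS clients locate a CA in /etc/ssl/certs by <hash>.0, so each cert is hashed with openssl x509 -hash and symlinked under that name. A sketch of the same step in Go (paths as in the log; writing /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace a stale link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link)
}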
	I0531 18:09:24.612350  269289 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:24.612448  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:09:24.612477  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:24.635020  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:24.635044  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:24.635056  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:24.635066  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:24.635074  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:24.635089  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:24.635098  269289 cri.go:87] found id: ""
	I0531 18:09:24.635135  269289 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:09:24.646413  269289 cri.go:114] JSON = null
	W0531 18:09:24.646451  269289 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
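The warning above comes from comparing two views of the same containers: crictl ps found six kube-system containers, but `runc list -f json` printed null, meaning there are no runc-visible paused containers to resume, so the unpause step is skipped with a warning and startup continues. A sketch of that check:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []map[string]any
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	// `null` unmarshals to a nil slice, hence "list returned 0 containers".
	fmt.Printf("runc sees %d containers\n", len(containers))
}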
	I0531 18:09:24.646485  269289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:09:24.652823  269289 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:09:24.652845  269289 kubeadm.go:626] restartCluster start
	I0531 18:09:24.652871  269289 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:09:24.658784  269289 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.659393  269289 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531175604-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:24.659677  269289 kubeconfig.go:127] "embed-certs-20220531175604-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:09:24.660165  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
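The verify/repair pair above is driven by a simple lookup: load the kubeconfig, check whether the profile has a context entry, and if not, rewrite the file under a write lock. A sketch of the check using client-go (the kubeconfig path is shortened here for the example):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.../kubeconfig") // shortened path
	if err != nil {
		log.Fatal(err)
	}
	if _, ok := cfg.Contexts["embed-certs-20220531175604-6903"]; !ok {
		fmt.Println("context missing - will repair kubeconfig")
	}
}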
	I0531 18:09:24.661391  269289 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:09:24.667472  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.667510  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.674690  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.875005  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.875078  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.883754  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.075164  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.075227  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.083571  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.275792  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.275862  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.284084  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.475412  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.475492  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.483769  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.675018  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.675093  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.683421  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.875700  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.875758  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.884127  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.075367  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.075441  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.083636  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.274875  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.274936  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.283259  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.475530  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.475600  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.483831  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.675186  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.675264  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.683305  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.875527  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.875591  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.883712  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.074838  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.074911  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.082943  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.275201  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.275279  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.283352  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.475732  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.475810  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.484027  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.675351  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.675423  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.683562  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.683579  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.683610  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.690956  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.690974  269289 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:09:27.690980  269289 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:09:27.690988  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:09:27.691025  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:27.714735  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:27.714760  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:27.714770  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:27.714779  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:27.714788  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:27.714799  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:27.714808  269289 cri.go:87] found id: ""
	I0531 18:09:27.714813  269289 cri.go:232] Stopping containers: [1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad]
	I0531 18:09:27.714851  269289 ssh_runner.go:195] Run: which crictl
	I0531 18:09:27.717532  269289 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad
	I0531 18:09:27.740922  269289 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:09:27.750082  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:09:27.756789  269289 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:56 /etc/kubernetes/scheduler.conf
	
	I0531 18:09:27.756825  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:09:27.763097  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:09:27.769330  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:09:23.375648  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.375756  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.875533  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.704608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:29.705088  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.775774  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.775824  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:09:27.782146  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:09:27.788206  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.788249  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:09:27.794183  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800354  269289 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800371  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:27.840426  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.483103  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.611279  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.659562  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
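Rather than a full `kubeadm init`, restartCluster replays the individual init phases above against the existing data directory, which regenerates certs, kubeconfigs, the kubelet bootstrap, and the control-plane static pods while preserving etcd state. The same sequence, sketched in Go:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// The phases and config path match the commands in the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			log.Fatalf("phase %v: %v\n%s", p, err, out)
		}
	}
	fmt.Println("control plane re-initialized")
}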
	I0531 18:09:28.717430  269289 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:09:28.717495  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.225924  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.725826  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.225776  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.725310  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.225503  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.725612  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.225964  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.726000  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.375823  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.875541  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.205237  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:34.205608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:36.704946  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:33.225375  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:33.726285  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.225275  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.725662  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.736940  269289 api_server.go:71] duration metric: took 6.019510906s to wait for apiserver process to appear ...
	I0531 18:09:34.736971  269289 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:09:34.736983  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:34.737332  269289 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0531 18:09:35.238095  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:35.375635  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:37.378182  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:38.110402  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:09:38.110472  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:09:38.237764  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.306327  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.306358  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
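[Editor's note] Each [+]/[-] line in the 500 body above is one named apiserver health check; failing checks print only "reason withheld" because the probe is unauthenticated, which is also why the very first attempt came back 403 for system:anonymous before the RBAC post-start hooks had run. The endpoint keeps returning 500 until every check passes, then flips to 200. A minimal sketch of such an anonymous probe (an illustration, not minikube's api_server.go; InsecureSkipVerify stands in for its real certificate handling):

    // Sketch of an unauthenticated healthz probe against the URL from
    // the log, interpreting the three status codes seen above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: skip CA setup
    		},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
    	if err != nil {
    		fmt.Println("dial/TLS error:", err) // apiserver not up yet
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	switch resp.StatusCode {
    	case http.StatusOK:
    		fmt.Println("healthy") // with ?verbose the body lists every [+] check
    	case http.StatusForbidden:
    		// RBAC is up but anonymous access to /healthz is not (yet) allowed.
    		fmt.Println("403: RBAC rejecting the anonymous probe")
    	default:
    		// 500 while [-] checks such as poststarthook/rbac/bootstrap-roles still fail.
    		fmt.Printf("%d:\n%s\n", resp.StatusCode, body)
    	}
    }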
	I0531 18:09:38.737926  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.744239  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.744265  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.237517  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.242116  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:39.242143  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.738349  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.743589  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:09:39.749147  269289 api_server.go:140] control plane version: v1.23.6
	I0531 18:09:39.749166  269289 api_server.go:130] duration metric: took 5.012189517s to wait for apiserver health ...
	I0531 18:09:39.749173  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:39.749179  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:39.751213  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:09:38.705223  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:41.205391  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.752701  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:09:39.818871  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:09:39.818891  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:09:39.834366  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
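[Editor's note] The two steps above are: copy the generated kindnet manifest to /var/tmp/minikube/cni.yaml inside the node, then apply it with the kubectl binary pinned to the cluster's Kubernetes version. Run inside the node, the apply step is roughly equivalent to the following sketch (paths are verbatim from the log; this is an assumed equivalent, not minikube's ssh_runner):

    // Sketch of the CNI apply step as a plain command execution.
    // This would run inside the minikube node, where these paths exist.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }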
	I0531 18:09:40.442544  269289 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:09:40.449925  269289 system_pods.go:59] 9 kube-system pods found
	I0531 18:09:40.449965  269289 system_pods.go:61] "coredns-64897985d-w2s2k" [272d1735-077b-4af0-a2fa-5b0f85c8e4fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.449977  269289 system_pods.go:61] "etcd-embed-certs-20220531175604-6903" [73942f36-bfae-49e5-87fb-43820d0182cc] Running
	I0531 18:09:40.449988  269289 system_pods.go:61] "kindnet-jrlsl" [c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:09:40.449999  269289 system_pods.go:61] "kube-apiserver-embed-certs-20220531175604-6903" [ecfb98e0-1317-4251-b80f-93138d93521a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:09:40.450014  269289 system_pods.go:61] "kube-controller-manager-embed-certs-20220531175604-6903" [f655b743-bb70-49d9-ab57-6575a05ae6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:09:40.450024  269289 system_pods.go:61] "kube-proxy-nvktf" [a3c917bd-93a0-40b6-85c5-7ea637a1aaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:09:40.450039  269289 system_pods.go:61] "kube-scheduler-embed-certs-20220531175604-6903" [7ff61926-9d06-4412-b29e-a3b958ae7fdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:09:40.450056  269289 system_pods.go:61] "metrics-server-b955d9d8-5hcnq" [bc956dda-3db9-478e-8fe9-4f4375857a12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450067  269289 system_pods.go:61] "storage-provisioner" [d2e35bf4-3bfa-46e5-91c7-70a4f1ab348a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450075  269289 system_pods.go:74] duration metric: took 7.509225ms to wait for pod list to return data ...
	I0531 18:09:40.450087  269289 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:09:40.452551  269289 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:09:40.452581  269289 node_conditions.go:123] node cpu capacity is 8
	I0531 18:09:40.452591  269289 node_conditions.go:105] duration metric: took 2.4994ms to run NodePressure ...
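[Editor's note] The NodePressure verification reads each node's reported capacity (here 8 CPUs and 304695084Ki of ephemeral storage) and confirms no pressure condition is raised. The log only shows the results; as an assumed-equivalent illustration, the same fields can be read with client-go (kubeconfig path from the log; this would run inside the node or against a copied kubeconfig):

    // Sketch, not minikube's node_conditions.go: list nodes, print
    // capacity, and flag any pressure condition that is True.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
    			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
    		for _, c := range n.Status.Conditions {
    			// MemoryPressure/DiskPressure/PIDPressure should all be False.
    			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
    			}
    		}
    	}
    }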
	I0531 18:09:40.452605  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:40.569376  269289 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573483  269289 kubeadm.go:777] kubelet initialised
	I0531 18:09:40.573510  269289 kubeadm.go:778] duration metric: took 4.104573ms waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573518  269289 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:09:40.580067  269289 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
	I0531 18:09:42.585321  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
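[Editor's note] Every pod_ready.go:102 line in this section is one iteration of a poll that fetches the pod and inspects its status conditions. The CoreDNS pods stay Pending because the sole node still carries the node.kubernetes.io/not-ready taint, which is cleared only once the node reports Ready (typically after the freshly applied CNI comes up), so PodScheduled remains False with the Unschedulable message shown. A condensed client-go sketch of such a readiness poll (an assumption about equivalent logic, not the actual pod_ready.go; the pod name and kubeconfig path are taken from the log):

    // Sketch: poll a pod until its Ready condition is True, printing
    // why scheduling is blocked in the meantime.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	name := "coredns-64897985d-w2s2k" // pod name from the log above
    	for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			fmt.Println("get failed:", err)
    			continue
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("pod is Ready")
    				return
    			}
    			if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionFalse {
    				// e.g. "0/1 nodes are available: 1 node(s) had taint
    				// {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
    				fmt.Printf("not scheduled (%s): %s\n", c.Reason, c.Message)
    			}
    		}
    	}
    	fmt.Println("timed out waiting for", name)
    }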
	I0531 18:09:39.876171  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:42.376043  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:43.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:45.705136  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.586120  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:47.085707  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.875548  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:46.876233  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:48.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:50.206057  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.585053  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.585603  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.375708  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.875940  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:52.705353  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.705507  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.084883  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.085413  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:53.876111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.375796  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:57.205002  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:59.704756  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.085579  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.584952  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.585345  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.875791  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.876220  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.205444  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.205856  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:06.704493  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:07.085004  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:03.375587  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:05.876410  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.705331  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:11.205277  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:09.585758  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.085534  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.375300  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:10.375906  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.875744  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:13.205338  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:15.704598  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.085723  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:16.085916  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.876170  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.376062  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.705091  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.205306  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:18.584998  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.585431  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:19.875325  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:21.875824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:22.704807  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.205734  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.085208  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.585032  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.585926  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.876246  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:26.375824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.704471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.205614  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:28.375853  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.205748  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 41 similar pod_ready.go:102 poll lines elided: the same Unschedulable status repeats every ~2-2.5s for all three pods through 18:11:02 ...]
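	# [editor's note] Every poll above fails for the same reason: the single node still
	# carries the node.kubernetes.io/not-ready taint, which the node-lifecycle controller
	# only removes once the kubelet reports Ready, and that is commonly blocked here on the
	# CNI coming up. Hypothetical diagnostics (not part of this run, assuming cluster access):
	#   $ kubectl describe node | grep -A3 Taints
	#   $ kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide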
	I0531 18:11:03.702287  261225 pod_ready.go:81] duration metric: took 4m0.002387032s waiting for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	E0531 18:11:03.702314  261225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:11:03.702336  261225 pod_ready.go:38] duration metric: took 4m0.006822044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:11:03.702359  261225 kubeadm.go:630] restartCluster took 4m15.094217425s
	W0531 18:11:03.702487  261225 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:11:03.702514  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:11:05.343353  261225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.640813037s)
	I0531 18:11:05.343437  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:05.352859  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:11:05.359859  261225 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:11:05.359907  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:11:05.366334  261225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:11:05.366377  261225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
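	# [editor's note] The four missing *.conf files above are expected: the "kubeadm reset"
	# a few lines earlier wiped /etc/kubernetes, so minikube skips stale-config cleanup and
	# re-runs "kubeadm init" from the regenerated /var/tmp/minikube/kubeadm.yaml. The long
	# --ignore-preflight-errors list matches the docker driver, where in-container checks
	# such as Swap, SystemVerification and bridge-nf-call-iptables are routinely skipped
	# (see the "ignoring SystemVerification ... because of docker driver" line above).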
	[... 13 similar pod_ready.go:102 poll lines elided for coredns-64897985d-w2s2k (269289) and coredns-64897985d-92zgx (265084), same Unschedulable status, 18:11:03-18:11:17 ...]
	I0531 18:11:18.709173  261225 out.go:204]   - Generating certificates and keys ...
	I0531 18:11:18.711907  261225 out.go:204]   - Booting up control plane ...
	I0531 18:11:18.714606  261225 out.go:204]   - Configuring RBAC rules ...
	I0531 18:11:18.716465  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:11:18.716486  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:11:18.717949  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:11:18.719198  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:11:18.722600  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:11:18.722616  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:11:18.735057  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
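	# [editor's note] With --driver=docker and the containerd runtime, minikube recommends
	# kindnet as the CNI (cni.go:162 above), stages the ~2.4 kB manifest at
	# /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. Once the kindnet
	# pod is Running the kubelet should report Ready and the not-ready taint should clear.
	# A hypothetical way to verify (not from this run):
	#   $ kubectl -n kube-system get ds,pods -o wide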
	I0531 18:11:19.350353  261225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:11:19.350404  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.350427  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531175323-6903 minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.430085  261225 ops.go:34] apiserver oom_adj: -16
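	# [editor's note] The oom_adj probe above is a sanity check: a negative value such as
	# -16 (legacy oom_adj scale) tells the kernel's OOM killer to prefer other victims over
	# the API server. On current kernels the equivalent knob, on a different scale, is:
	#   $ cat /proc/$(pgrep kube-apiserver)/oom_score_adj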
	I0531 18:11:19.430086  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.983488  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.483888  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:21.483016  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 4 similar pod_ready.go:102 poll lines elided (269289, 265084), 18:11:18-18:11:21 ...]
	[... 10 "kubectl get sa default" retries elided, ~0.5s apart, 18:11:21-18:11:26 ...]
	[... 6 similar pod_ready.go:102 poll lines elided (269289, 265084), 18:11:23-18:11:27 ...]
	[... 10 more "kubectl get sa default" retries elided, ~0.5s apart, 18:11:26-18:11:31 ...]
	I0531 18:11:31.537008  261225 kubeadm.go:1045] duration metric: took 12.186656817s to wait for elevateKubeSystemPrivileges.
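	# [editor's note] The "get sa default" retry loop above is minikube waiting for the
	# namespace's default ServiceAccount: under the default admission configuration, pod
	# creation in a namespace fails until the service-account controller has created it.
	# Equivalent manual check (hypothetical, not from this run):
	#   $ kubectl -n default get serviceaccount default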
	I0531 18:11:31.537033  261225 kubeadm.go:397] StartCluster complete in 4m42.970207425s
	I0531 18:11:31.537049  261225 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:31.537139  261225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:11:31.538101  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:32.051475  261225 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531175323-6903" rescaled to 1
	I0531 18:11:32.051538  261225 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:11:32.053328  261225 out.go:177] * Verifying Kubernetes components...
	I0531 18:11:32.051603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:11:32.051631  261225 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:11:32.051839  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:11:32.054610  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:32.054630  261225 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054636  261225 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054661  261225 addons.go:65] Setting dashboard=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054667  261225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531175323-6903"
	I0531 18:11:32.054666  261225 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054677  261225 addons.go:153] Setting addon dashboard=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054685  261225 addons.go:165] addon dashboard should already be in state true
	I0531 18:11:32.054694  261225 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:32.054650  261225 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054708  261225 addons.go:165] addon metrics-server should already be in state true
	W0531 18:11:32.054715  261225 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:11:32.054731  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054749  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054752  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.055002  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055252  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055266  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055256  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.065260  261225 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:11:32.103052  261225 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:11:32.107208  261225 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108480  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:11:32.108501  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:11:32.109942  261225 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108562  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.111742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:11:32.111790  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:11:32.111836  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.114237  261225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	[... 4 similar pod_ready.go:102 poll lines elided (269289, 265084), 18:11:30-18:11:32 ...]
	I0531 18:11:32.115989  261225 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.116015  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:11:32.116006  261225 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.116032  261225 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:11:32.116057  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.116079  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.116746  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.154692  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
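	# [editor's note] The sed pipeline above rewrites the CoreDNS ConfigMap in place,
	# inserting this stanza ahead of the existing "forward . /etc/resolv.conf" line so
	# that workloads can resolve host.minikube.internal to the gateway:
	#
	#       hosts {
	#          192.168.67.1 host.minikube.internal
	#          fallthrough
	#       }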
	I0531 18:11:32.172159  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176336  261225 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.176359  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:11:32.176359  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176412  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.178381  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.214197  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.405898  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:11:32.405927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:11:32.418989  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.420809  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.421818  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:11:32.421841  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:11:32.427921  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:11:32.427945  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:11:32.502461  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:11:32.502491  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:11:32.513965  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:11:32.513988  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:11:32.519888  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.519911  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:11:32.534254  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:11:32.534276  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:11:32.613065  261225 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
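
	[editor's note] The `host.minikube.internal` record reported here is added by rewriting the `coredns` ConfigMap: the sed pipeline logged at 18:11:32.154692 inserts a `hosts` plugin block ahead of the `forward` plugin so in-cluster lookups of the host gateway (192.168.67.1) resolve. Below is an illustrative client-go version of the same edit; this is a sketch, not minikube's actual implementation (minikube shells out to kubectl+sed as logged), and the indentation of the inserted block is cosmetic to CoreDNS.

```go
// Sketch: insert a hosts{} block before the forward plugin in the coredns
// ConfigMap, mirroring the sed pipeline in the log above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		// Place the hosts plugin ahead of "forward ." so it is consulted first.
		hosts := "hosts {\n       192.168.67.1 host.minikube.internal\n       fallthrough\n    }\n    "
		cm.Data["Corefile"] = strings.Replace(corefile, "forward .", hosts+"forward .", 1)
		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("host record injected into CoreDNS")
}
```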
	I0531 18:11:32.613879  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.624901  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:11:32.624927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:11:32.718626  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:11:32.718699  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:11:32.811670  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:11:32.811701  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:11:32.908742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:11:32.908771  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:11:33.007964  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.007996  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:11:33.107854  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.513893  261225 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:34.072421  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.312128  261225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.204221815s)
	I0531 18:11:34.314039  261225 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:11:34.315350  261225 addons.go:417] enableAddons completed in 2.263731051s
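
	[editor's note] Every addon enabled above follows the same two-step pattern: `ssh_runner.go:362] scp memory --> <path>` streams the manifest bytes over SSH straight into /etc/kubernetes/addons/ on the node (no local temp file), and then the pinned kubectl applies them in one batch. A minimal sketch of the streaming step follows, assuming OpenSSH on the PATH; the key path and manifest are placeholders, and this is not minikube's ssh_runner.

```go
// Sketch of the "scp memory --> file" pattern: pipe in-memory bytes over SSH
// into a root-owned file via `sudo tee`. Placeholder host/port/key values.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func copyMemory(host, port, keyPath string, data []byte, dest string) error {
	cmd := exec.Command("ssh", "-p", port, "-i", keyPath, "docker@"+host,
		"sudo tee "+dest+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data) // the manifest travels over stdin
	return cmd.Run()
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
	err := copyMemory("127.0.0.1", "49432", "/path/to/id_rsa", manifest,
		"/etc/kubernetes/addons/demo.yaml")
	if err != nil {
		panic(err)
	}
	// The node-side step is then, as logged:
	//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	//     /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/demo.yaml
	fmt.Println("manifest staged")
}
```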
	I0531 18:11:36.571090  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.585378  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.085134  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:35.375998  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.876022  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:38.571611  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:40.571791  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:39.585652  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.085745  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:40.375543  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.375912  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
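
	[editor's note] The interleaved `node_ready.go:58` and `pod_ready.go:102` lines come from three concurrent test profiles (PIDs 261225, 265084, 269289), each polling the API server until its node or CoreDNS pod reports the Ready condition; the coredns pods stay Pending because the single node still carries the `node.kubernetes.io/not-ready` taint. A minimal client-go sketch of that loop follows (not minikube's actual code; the 2s interval is an assumption, the 4m budget matches the timeout seen later in this log).

```go
// Sketch: poll a pod's Ready condition until it is True or a timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 2s with the same 4m0s budget the log eventually times out on.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, getErr := client.CoreV1().Pods("kube-system").Get(
			context.Background(), "coredns-64897985d-92zgx", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		return podReady(pod), nil
	})
	fmt.Println("wait result:", err) // nil once Ready, wait.ErrWaitTimeout at the cutoff
}
```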
	I0531 18:11:42.571860  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:45.071529  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:44.585489  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.084675  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:44.875655  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:46.876121  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.072141  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.570942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:51.571718  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.085124  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.585600  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:48.876234  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.375464  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:54.071361  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:56.072083  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:53.585630  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.085163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:53.876096  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.375046  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.571942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:01.071618  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:58.585559  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:01.085890  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.376093  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:00.876187  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:02.876231  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:03.072143  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:05.570848  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:03.086038  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.585743  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.375789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.875852  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.570951  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:09.571526  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:11.571905  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:08.085268  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.585201  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.375245  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:12.376080  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.071606  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:16.072126  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:13.085556  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:15.085642  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.585172  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.875789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.375091  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:18.571244  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:20.572070  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:19.585882  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:22.085420  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:19.375353  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:21.376109  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.071952  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:25.571538  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:24.586045  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.586231  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.876833  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.375751  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:27.571821  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.571882  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.085641  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:31.085685  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:28.875672  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:30.875820  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.876354  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.071187  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:34.071420  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:36.071564  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:33.585013  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.585739  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.375221  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:37.376282  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:38.570968  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:40.571348  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:38.085427  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.585163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:42.585706  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:39.875351  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.873471  265084 pod_ready.go:81] duration metric: took 4m0.002908121s waiting for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	E0531 18:12:40.873493  265084 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:12:40.873522  265084 pod_ready.go:38] duration metric: took 4m0.007756787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:12:40.873544  265084 kubeadm.go:630] restartCluster took 4m15.709568906s
	W0531 18:12:40.873671  265084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:12:40.873698  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:12:42.413886  265084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.540162536s)
	I0531 18:12:42.413945  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:12:42.423107  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:12:42.429872  265084 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:12:42.429912  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:12:42.436297  265084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:12:42.436331  265084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
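
	[editor's note] Because restartCluster timed out waiting for coredns (the 4m0s cutoff at 18:12:40), minikube falls back to a full re-bootstrap: `kubeadm reset --force` against the containerd CRI socket, a config check (which fails cleanly since reset removed /etc/kubernetes/*.conf), then `kubeadm init` from the staged config. A stripped-down sketch of that sequence under the logged paths; the `--ignore-preflight-errors` list is abbreviated here and error handling is minimal.

```go
// Sketch of the reset-then-reinit fallback taken above after restartCluster
// timed out. Mirrors the logged commands; flags are abbreviated.
package main

import (
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	path := "PATH=/var/lib/minikube/binaries/v1.23.6:" + os.Getenv("PATH")
	// 1. Tear down the half-started control plane (containerd CRI socket, as logged).
	if err := run("sudo", "env", path, "kubeadm", "reset",
		"--cri-socket", "/run/containerd/containerd.sock", "--force"); err != nil {
		panic(err)
	}
	// 2. Re-bootstrap from the kubeadm config minikube staged at /var/tmp/minikube.
	if err := run("sudo", "env", path, "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Swap,Mem,SystemVerification"); err != nil {
		panic(err)
	}
}
```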
	I0531 18:12:42.571552  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.072252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.084735  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.085284  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.570931  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.571608  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:51.571648  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.584898  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:51.586112  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.656930  265084 out.go:204]   - Generating certificates and keys ...
	I0531 18:12:55.659590  265084 out.go:204]   - Booting up control plane ...
	I0531 18:12:55.662052  265084 out.go:204]   - Configuring RBAC rules ...
	I0531 18:12:55.664008  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:12:55.664023  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:12:55.665448  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:12:54.071703  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:56.071909  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:54.085829  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:56.584911  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.666615  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:12:55.670087  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:12:55.670101  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:12:55.683282  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
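
	[editor's note] With the docker driver and the containerd runtime, minikube recommends kindnet as the CNI (cni.go:162 above), stages the manifest at /var/tmp/minikube/cni.yaml over SSH, and applies it with the pinned kubectl. The final apply, reproduced as a sketch with os/exec; paths and the kubectl pin match the log lines above.

```go
// Sketch: apply the staged CNI manifest with the version-pinned kubectl.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
```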
	I0531 18:12:56.287125  265084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:12:56.287250  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.287269  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903 minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.294761  265084 ops.go:34] apiserver oom_adj: -16
	I0531 18:12:56.356555  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.931369  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.430985  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.930763  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.072370  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:00.571645  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:58.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:00.585876  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:58.431243  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.930845  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.431397  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.931568  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.431233  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.930831  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.430783  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.931582  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.431559  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.931164  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.072253  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:05.571252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:03.085192  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:05.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:03.431622  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.931432  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.431651  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.931602  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.431254  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.931669  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.431587  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.930870  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.431379  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.931781  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.431738  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.931001  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.988549  265084 kubeadm.go:1045] duration metric: took 12.701350416s to wait for elevateKubeSystemPrivileges.
	I0531 18:13:08.988585  265084 kubeadm.go:397] StartCluster complete in 4m43.864893986s
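The burst of identical `kubectl get sa default` runs above is minikube's elevateKubeSystemPrivileges wait: it re-issues the query roughly every 500ms until the API server can serve the `default` ServiceAccount, then records the elapsed time (12.7s here). A minimal shell sketch of the same poll; the loop and sleep interval are illustrative, while the command itself is taken verbatim from the log:

    # illustrative poll; mirrors the repeated "kubectl get sa default" lines above
    until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done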
	I0531 18:13:08.988604  265084 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:08.988717  265084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:13:08.989847  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:09.508027  265084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531175509-6903" rescaled to 1
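kapi.go rescales the coredns deployment to a single replica so this one-node cluster is not left waiting on a second pod that can never schedule. An illustrative kubectl equivalent (minikube's own code path goes through the client-go API rather than the CLI):

    # illustrative equivalent of the coredns rescale above
    kubectl -n kube-system scale deployment coredns --replicas=1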
	I0531 18:13:09.508095  265084 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:13:09.510016  265084 out.go:177] * Verifying Kubernetes components...
	I0531 18:13:09.508139  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:13:09.508164  265084 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:13:09.508352  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:13:09.511398  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:09.511420  265084 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511435  265084 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511445  265084 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511451  265084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511458  265084 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:13:09.511464  265084 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511494  265084 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511509  265084 addons.go:165] addon metrics-server should already be in state true
	I0531 18:13:09.511510  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511560  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511421  265084 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511623  265084 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511642  265084 addons.go:165] addon dashboard should already be in state true
	I0531 18:13:09.511686  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511794  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512032  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512055  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512135  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.524059  265084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
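node_ready.go now polls the node object, logging one `has status "Ready":"False"` line per check until the Ready condition flips or the 6m budget is exhausted. An illustrative one-shot equivalent with kubectl, using the profile's node name from the line above:

    # illustrative equivalent of the node_ready wait above
    kubectl wait --for=condition=Ready \
      node/default-k8s-different-port-20220531175509-6903 --timeout=6m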
	I0531 18:13:09.564034  265084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:13:09.565498  265084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.565568  265084 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.566183  265084 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.566993  265084 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:13:09.566996  265084 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:13:09.568335  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:13:09.568357  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:13:09.567020  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:13:09.568408  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.568430  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.567022  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.569999  265084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.568977  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.571342  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:13:09.571364  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:13:09.571420  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.602943  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:13:09.610124  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.617756  265084 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.617783  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:13:09.617847  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.619119  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.626594  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.663799  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.809728  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:13:09.809753  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:13:09.810223  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:13:09.810246  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:13:09.816960  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.817408  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.824732  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:13:09.824754  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:13:09.825129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:13:09.825148  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:13:09.838196  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:13:09.838214  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:13:09.906924  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.906947  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:13:09.918653  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:13:09.918674  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:13:09.923798  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.931866  265084 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
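The sed pipeline run at 18:13:09.602943 edits the coredns ConfigMap in place, inserting a hosts stanza ahead of the existing `forward . /etc/resolv.conf` directive so that pods can resolve host.minikube.internal to the gateway. The injected fragment, reconstructed from the sed expression in that command:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }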
	I0531 18:13:10.002603  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:13:10.002630  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:13:10.020129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:13:10.020167  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:13:10.106375  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:13:10.106399  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:13:10.123765  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:13:10.123794  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:13:10.144076  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.144119  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:13:10.215134  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.622559  265084 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:11.339286  265084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.12406901s)
	I0531 18:13:11.341817  265084 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:13:07.571617  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:09.572391  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:08.085066  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:10.085481  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:12.585610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:11.343086  265084 addons.go:417] enableAddons completed in 1.834929772s
	I0531 18:13:11.534064  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:12.071842  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:14.072366  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:16.571700  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:15.085503  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:17.585477  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:13.534515  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:15.534685  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.034548  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.571742  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:21.071863  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:20.085466  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:22.585412  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:20.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.033886  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.570885  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:25.571318  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:24.585439  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:27.085126  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:25.034013  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.534277  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.571734  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:30.071441  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:29.585497  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:32.084886  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:29.534310  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:31.534424  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:32.072003  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.571176  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:36.571707  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:36.585785  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:33.534780  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:35.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.033558  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.571878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:41.071580  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:39.085376  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:40.582100  269289 pod_ready.go:81] duration metric: took 4m0.001996579s waiting for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
	E0531 18:13:40.582169  269289 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:13:40.582215  269289 pod_ready.go:38] duration metric: took 4m0.00868413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:13:40.582275  269289 kubeadm.go:630] restartCluster took 4m15.929419982s
	W0531 18:13:40.582469  269289 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
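Every pod_ready timeout above traces back to the same scheduler message: the single node still carries the node.kubernetes.io/not-ready taint, so coredns can never leave Pending. An illustrative way to confirm which taints a node currently holds (generic kubectl, not part of minikube's flow):

    # illustrative: list each node alongside its taints
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'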
	I0531 18:13:40.582501  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:13:42.189588  269289 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.6070578s)
	I0531 18:13:42.189653  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:42.199068  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:13:42.205821  269289 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:13:42.205873  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:13:42.212208  269289 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:13:42.212266  269289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
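Having given up on restarting the cluster, minikube falls back to a full re-bootstrap: kubeadm reset wipes /etc/kubernetes (hence the four missing-file errors from the ls probe, which it reads as "no stale config to clean"), then kubeadm init rebuilds the control plane with the preflight checks that cannot pass inside a Docker container suppressed. A condensed, hedged sketch of that sequence; the complete flag lists are in the log lines above:

    # illustrative condensation of the reset/init recovery above
    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Swap,Mem,SystemVerification  # subset; see log for the full list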
	I0531 18:13:40.033618  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:42.034150  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:43.072273  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:45.571260  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:44.534599  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:46.535205  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:48.071960  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:50.570994  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:49.034908  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:51.534484  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:56.088137  269289 out.go:204]   - Generating certificates and keys ...
	I0531 18:13:56.090627  269289 out.go:204]   - Booting up control plane ...
	I0531 18:13:56.093129  269289 out.go:204]   - Configuring RBAC rules ...
	I0531 18:13:56.094774  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:13:56.094792  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:13:56.096311  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:13:52.571414  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:55.071807  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:56.097594  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:13:56.101201  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:13:56.101218  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:13:56.113794  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:13:56.748029  269289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:13:56.748093  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.748112  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.822037  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.825886  269289 ops.go:34] apiserver oom_adj: -16
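ops.go here confirms the re-bootstrapped apiserver's OOM score adjustment: -16 (versus the default 0) makes the kernel's OOM killer less likely to pick kube-apiserver under memory pressure. The probe is the one-liner from the log:

    # read the apiserver's oom_adj; -16 = deprioritized for OOM kills
    cat /proc/$(pgrep kube-apiserver)/oom_adj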
	I0531 18:13:57.376176  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:53.534591  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:55.536859  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:58.033966  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:57.071988  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:59.571338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:01.573338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:57.876331  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.376391  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.876462  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.376020  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.876318  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.375732  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.876322  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.376258  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.875649  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:02.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.534980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:02.535335  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:04.071798  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:06.571345  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:02.875698  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.376128  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.876242  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.376344  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.875652  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.375884  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.876374  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.375802  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.876594  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:07.376348  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.033642  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.534735  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.876085  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.431068  269289 kubeadm.go:1045] duration metric: took 11.683021831s to wait for elevateKubeSystemPrivileges.
	I0531 18:14:08.431097  269289 kubeadm.go:397] StartCluster complete in 4m43.818756101s
	I0531 18:14:08.431119  269289 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.431248  269289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:14:08.432696  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.947002  269289 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 18:14:08.947054  269289 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:14:08.948675  269289 out.go:177] * Verifying Kubernetes components...
	I0531 18:14:08.947153  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:14:08.947168  269289 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:14:08.947347  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:14:08.950015  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:14:08.950062  269289 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950074  269289 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950082  269289 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950092  269289 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950100  269289 addons.go:165] addon metrics-server should already be in state true
	I0531 18:14:08.950148  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950083  269289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 18:14:08.950101  269289 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950271  269289 addons.go:165] addon dashboard should already be in state true
	I0531 18:14:08.950319  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950061  269289 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950364  269289 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950378  269289 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:14:08.950417  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950518  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950641  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950757  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950872  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.963248  269289 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:14:08.996303  269289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:14:08.997754  269289 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:08.997776  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:14:08.997822  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:08.999414  269289 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.999437  269289 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:14:08.999466  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:09.001079  269289 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:14:08.999831  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:09.003755  269289 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:14:09.002479  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:14:09.005292  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:14:09.005383  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.007089  269289 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:14:09.008807  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:14:09.008840  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:14:09.008896  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.039305  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:14:09.047368  269289 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.047395  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:14:09.047454  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.050164  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.052536  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.061683  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.090288  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.160469  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:09.212314  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:14:09.212343  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:14:09.216077  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.217010  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:14:09.217029  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:14:09.227272  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:14:09.227295  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:14:09.232066  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:14:09.232089  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:14:09.314812  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:14:09.314893  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:14:09.315135  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.315179  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:14:09.329470  269289 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0531 18:14:09.406176  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:14:09.406200  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:14:09.408128  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.429793  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:14:09.429823  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:14:09.516510  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:14:09.516537  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:14:09.530570  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:14:09.530597  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:14:09.612604  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:14:09.612631  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:14:09.631042  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:09.631070  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:14:09.711865  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:10.228589  269289 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531175604-6903"
	I0531 18:14:10.618859  269289 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:14:09.072442  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:11.571596  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:10.620376  269289 addons.go:417] enableAddons completed in 1.673239404s
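addons.go reports metrics-server verified and all four addons enabled for embed-certs, mirroring the default-k8s-different-port flow earlier in the log. An illustrative manual spot-check (a hypothetical command choice, not minikube's verification code):

    # illustrative manual check that the addon's deployment settled
    kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m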
	I0531 18:14:10.977314  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:09.534892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:12.034154  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:13.571853  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:16.071183  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:13.477113  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:15.477267  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:17.477503  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:14.034360  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:16.534633  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:18.071690  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:20.570878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:19.976762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:22.476749  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:18.534988  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:21.033579  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:23.072062  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:25.571452  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:24.976557  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:26.977352  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:23.534867  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:26.034090  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:27.571576  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:30.072241  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:29.477126  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:31.976050  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:28.534857  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:31.033495  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:33.034149  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:32.571726  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:35.071520  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:33.977624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:36.477062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:35.034245  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.534397  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.072086  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:39.072305  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:41.571404  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:38.977021  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:41.477149  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:40.033940  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:42.534713  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:44.072401  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:46.571051  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:43.977151  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:46.476598  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:45.033654  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:47.033892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:48.571836  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:51.071985  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:48.477002  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:50.477180  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:49.034062  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:51.534464  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:53.571426  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:55.571659  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:52.976837  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:55.476624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:57.477076  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:54.033904  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:56.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:58.071447  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:00.072133  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:59.976445  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:01.976750  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:58.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:01.034476  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:02.072236  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.571269  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:06.571629  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:06.476291  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:03.534865  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:06.033980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:08.571738  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:11.071854  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:08.476832  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:10.976476  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:08.533980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:10.534643  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.033474  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.072258  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:15.072469  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:12.977062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.476762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.534881  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:18.033601  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:17.570753  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:19.571375  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:21.571479  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:17.976899  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:19.977288  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:22.476772  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:20.534891  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.033328  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.571892  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:26.071396  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:24.477019  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:26.976517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:25.033998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:27.534916  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:28.072042  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:30.571432  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:32.073761  261225 node_ready.go:38] duration metric: took 4m0.008462542s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:15:32.075979  261225 out.go:177] 
	W0531 18:15:32.077511  261225 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:15:32.077526  261225 out.go:239] * 
	W0531 18:15:32.078185  261225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:15:32.080080  261225 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	de47473beb36b       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   ada1687a9236a
	3dde6d2f94876       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   ada1687a9236a
	46ae8b49a2f40       4c03754524064       4 minutes ago        Running             kube-proxy                0                   17a4d7d0aae07
	19553e3109d01       595f327f224a4       4 minutes ago        Running             kube-scheduler            2                   acca4113a0648
	da2122c0c30c1       8fa62c12256df       4 minutes ago        Running             kube-apiserver            2                   d6e71d2677426
	434c691688029       df7b72818ad2e       4 minutes ago        Running             kube-controller-manager   2                   9cfc7f577b371
	26844adc7521e       25f8c7f3da61c       4 minutes ago        Running             etcd                      2                   63e1a77a97e58
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 18:06:32 UTC, end at Tue 2022-05-31 18:15:33 UTC. --
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.410542322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.410578026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.411601456Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17a4d7d0aae077808b7d7413fd30a896158a9ac332927302b40028cc9cddded6 pid=3316 runtime=io.containerd.runc.v2
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.630199879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m75cf,Uid:34500e7d-a542-4026-a0da-4a45c2437da8,Namespace:kube-system,Attempt:0,} returns sandbox id \"17a4d7d0aae077808b7d7413fd30a896158a9ac332927302b40028cc9cddded6\""
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.704329695Z" level=info msg="CreateContainer within sandbox \"17a4d7d0aae077808b7d7413fd30a896158a9ac332927302b40028cc9cddded6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.723306022Z" level=info msg="CreateContainer within sandbox \"17a4d7d0aae077808b7d7413fd30a896158a9ac332927302b40028cc9cddded6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46ae8b49a2f40a2cfbd705f82fa54f8df0a59683b743d77c8ded4297a54aca3e\""
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.727553195Z" level=info msg="StartContainer for \"46ae8b49a2f40a2cfbd705f82fa54f8df0a59683b743d77c8ded4297a54aca3e\""
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.822564641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-s4rf7,Uid:478a0044-cd97-4cf0-9805-be336cddfb83,Namespace:kube-system,Attempt:0,} returns sandbox id \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\""
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.825347464Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.909886927Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"3dde6d2f94876ab012c0b4c98d7d32ff7c5a157c3dd7e894140a36e8d24f9fff\""
	May 31 18:11:32 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:32.911082313Z" level=info msg="StartContainer for \"3dde6d2f94876ab012c0b4c98d7d32ff7c5a157c3dd7e894140a36e8d24f9fff\""
	May 31 18:11:33 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:33.021347297Z" level=info msg="StartContainer for \"46ae8b49a2f40a2cfbd705f82fa54f8df0a59683b743d77c8ded4297a54aca3e\" returns successfully"
	May 31 18:11:33 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:11:33.231382685Z" level=info msg="StartContainer for \"3dde6d2f94876ab012c0b4c98d7d32ff7c5a157c3dd7e894140a36e8d24f9fff\" returns successfully"
	May 31 18:12:23 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:12:23.624456333Z" level=error msg="ContainerStatus for \"0c78a0612e29d3181f50087ba49d9322964a29e1d8c90fb327cab519e00528fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c78a0612e29d3181f50087ba49d9322964a29e1d8c90fb327cab519e00528fc\": not found"
	May 31 18:12:23 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:12:23.625045868Z" level=error msg="ContainerStatus for \"7632464849022e9d924dfc9f0f5a6382b8b9ea86b88dc8d598ee262c1b57d2ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7632464849022e9d924dfc9f0f5a6382b8b9ea86b88dc8d598ee262c1b57d2ee\": not found"
	May 31 18:12:23 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:12:23.625491736Z" level=error msg="ContainerStatus for \"b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b12fda9e12e52b9b6840ef918aaee3df78b8e2bdde2221a7eda67bc20347b76e\": not found"
	May 31 18:12:23 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:12:23.625901745Z" level=error msg="ContainerStatus for \"bc6e6cd52c236cf531491d5153a45ae88eba64d891a6b550c6b16e0fd41d4cff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc6e6cd52c236cf531491d5153a45ae88eba64d891a6b550c6b16e0fd41d4cff\": not found"
	May 31 18:14:13 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:14:13.547700060Z" level=info msg="shim disconnected" id=3dde6d2f94876ab012c0b4c98d7d32ff7c5a157c3dd7e894140a36e8d24f9fff
	May 31 18:14:13 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:14:13.547755956Z" level=warning msg="cleaning up after shim disconnected" id=3dde6d2f94876ab012c0b4c98d7d32ff7c5a157c3dd7e894140a36e8d24f9fff namespace=k8s.io
	May 31 18:14:13 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:14:13.547767213Z" level=info msg="cleaning up dead shim"
	May 31 18:14:13 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:14:13.556676231Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:14:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3734 runtime=io.containerd.runc.v2\n"
	May 31 18:14:14 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:14:14.083726890Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	May 31 18:14:14 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:14:14.094869986Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"de47473beb36b8f765ea3845c0b7e422906b2af82381f3a5778f4beeeba0c624\""
	May 31 18:14:14 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:14:14.095364049Z" level=info msg="StartContainer for \"de47473beb36b8f765ea3845c0b7e422906b2af82381f3a5778f4beeeba0c624\""
	May 31 18:14:14 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:14:14.305932331Z" level=info msg="StartContainer for \"de47473beb36b8f765ea3845c0b7e422906b2af82381f3a5778f4beeeba0c624\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220531175323-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220531175323-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=no-preload-20220531175323-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:11:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220531175323-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:15:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:11:31 +0000   Tue, 31 May 2022 18:11:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:11:31 +0000   Tue, 31 May 2022 18:11:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:11:31 +0000   Tue, 31 May 2022 18:11:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:11:31 +0000   Tue, 31 May 2022 18:11:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220531175323-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                3f650030-6900-444d-b03b-802678a62df1
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220531175323-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-s4rf7                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m1s
	  kube-system                 kube-apiserver-no-preload-20220531175323-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-no-preload-20220531175323-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-m75cf                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-no-preload-20220531175323-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m59s  kube-proxy  
	  Normal  Starting                 4m10s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s  kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [26844adc7521e3998a8fd7eb5959acfe71aef6577d68e710c3fc6d6d97fe5939] <==
	* {"level":"info","ts":"2022-05-31T18:11:13.009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-05-31T18:11:13.010Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220531175323-6903 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:11:13.704Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-05-31T18:11:13.704Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:15:33 up  1:58,  0 users,  load average: 0.50, 0.58, 0.90
	Linux no-preload-20220531175323-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [da2122c0c30c19a146de1126066a9662a3593887fda1084cb52b23bd621aedac] <==
	* I0531 18:11:16.832351       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:11:16.835603       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:11:17.513116       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:11:18.522488       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:11:18.528475       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:11:18.536741       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:11:23.708397       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:11:31.955772       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:11:32.005652       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:11:33.404569       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:11:33.507609       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.97.46.195]
	I0531 18:11:34.240166       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.107.214.108]
	W0531 18:11:34.304896       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:11:34.304972       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:11:34.304985       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:11:34.306645       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.108.112.126]
	W0531 18:12:34.305506       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:12:34.305579       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:12:34.305595       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:14:34.306512       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:14:34.306585       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:14:34.306598       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [434c691688029a16594dcace8e5cd18542a4229065076549eda61aee4dd3471c] <==
	* E0531 18:11:34.139814       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 18:11:34.141666       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:11:34.141822       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:11:34.142520       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:11:34.142552       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:11:34.147644       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:11:34.147656       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:11:34.205085       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-269mb"
	I0531 18:11:34.208231       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-cnl68"
	E0531 18:12:01.327790       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:12:01.742098       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:12:31.345726       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:12:31.757121       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:13:01.363891       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:13:01.771559       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:13:31.381981       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:13:31.784650       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:14:01.398074       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:14:01.799463       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:14:31.414834       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:14:31.814292       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:15:01.429153       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:15:01.828558       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:15:31.443752       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:15:31.842032       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [46ae8b49a2f40a2cfbd705f82fa54f8df0a59683b743d77c8ded4297a54aca3e] <==
	* I0531 18:11:33.203064       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 18:11:33.203207       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 18:11:33.203901       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:11:33.318321       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:11:33.318358       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:11:33.318369       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:11:33.318386       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:11:33.318744       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:11:33.319314       1 config.go:317] "Starting service config controller"
	I0531 18:11:33.319353       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:11:33.319317       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:11:33.319439       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:11:33.419858       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:11:33.420033       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [19553e3109d01af350a34965aa8b487908f950b1367f8f44363976e2b121b2d5] <==
	* W0531 18:11:15.420123       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:11:15.420191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:11:15.420143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:11:15.420094       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:15.419916       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:15.420215       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:11:15.420242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:11:15.420242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:15.420641       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:11:15.420679       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:11:15.420701       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:15.421027       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.318206       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:16.318242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.395823       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:16.395882       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.428498       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:11:16.428534       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:11:16.482737       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:16.482776       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.485585       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:16.485613       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.630989       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:11:16.631027       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 18:11:19.315480       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:06:32 UTC, end at Tue 2022-05-31 18:15:33 UTC. --
	May 31 18:13:33 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:13:33.926928    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:13:38 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:13:38.927898    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:13:43 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:13:43.929049    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:13:48 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:13:48.930309    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:13:53 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:13:53.931698    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:13:58 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:13:58.933180    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:03 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:03.934042    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:08 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:08.934733    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:13 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:13.935625    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:14 no-preload-20220531175323-6903 kubelet[2847]: I0531 18:14:14.081580    2847 scope.go:110] "RemoveContainer" containerID="3dde6d2f94876ab012c0b4c98d7d32ff7c5a157c3dd7e894140a36e8d24f9fff"
	May 31 18:14:18 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:18.936566    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:23 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:23.938172    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:28 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:28.939030    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:33 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:33.939950    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:38 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:38.940893    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:43 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:43.941869    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:48 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:48.942623    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:53 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:53.944092    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:14:58 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:14:58.945207    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:15:03 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:15:03.946632    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:15:08 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:15:08.947996    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:15:13 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:15:13.949560    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:15:18 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:15:18.951113    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:15:23 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:15:23.951969    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:15:28 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:15:28.953293    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-r6lzx metrics-server-b955d9d8-nfwnt storage-provisioner dashboard-metrics-scraper-56974995fc-cnl68 kubernetes-dashboard-8469778f77-269mb
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-r6lzx metrics-server-b955d9d8-nfwnt storage-provisioner dashboard-metrics-scraper-56974995fc-cnl68 kubernetes-dashboard-8469778f77-269mb
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-r6lzx metrics-server-b955d9d8-nfwnt storage-provisioner dashboard-metrics-scraper-56974995fc-cnl68 kubernetes-dashboard-8469778f77-269mb: exit status 1 (55.334179ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-r6lzx" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-nfwnt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-cnl68" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-269mb" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-r6lzx metrics-server-b955d9d8-nfwnt storage-provisioner dashboard-metrics-scraper-56974995fc-cnl68 kubernetes-dashboard-8469778f77-269mb: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (542.24s)
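
The repeating kubelet line above, "Network plugin returns error: cni plugin not initialized", means the container runtime never loaded a CNI configuration, so the node keeps reporting NetworkReady=false and the pods listed as non-running (coredns, metrics-server, storage-provisioner, dashboard) stay stuck. A minimal diagnostic sketch, assuming shell access to the node (the profile name comes from this run; the /etc/cni/net.mk path is the cni-conf-dir override visible in the default-k8s-different-port run below and is only assumed to apply here as well):

  # Hedged sketch: inspect CNI state inside the minikube node
  minikube ssh -p no-preload-20220531175323-6903   # open a shell in the node, then:
  ls -la /etc/cni/net.d /etc/cni/net.mk            # directories where a CNI config would land
  sudo crictl info                                 # containerd's CRI status, including its view of the CNI config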

x
+
TestStartStop/group/default-k8s-different-port/serial/SecondStart (543.26s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220531175509-6903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220531175509-6903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (9m1.294821455s)

-- stdout --
	* [default-k8s-different-port-20220531175509-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220531175509-6903 in cluster default-k8s-different-port-20220531175509-6903
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-different-port-20220531175509-6903" ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0531 18:08:08.309660  265084 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:08:08.309791  265084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:08:08.309803  265084 out.go:309] Setting ErrFile to fd 2...
	I0531 18:08:08.309815  265084 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:08:08.309926  265084 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:08:08.310162  265084 out.go:303] Setting JSON to false
	I0531 18:08:08.311302  265084 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6639,"bootTime":1654013849,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:08:08.311358  265084 start.go:125] virtualization: kvm guest
	I0531 18:08:08.313832  265084 out.go:177] * [default-k8s-different-port-20220531175509-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:08:08.315362  265084 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:08:08.315369  265084 notify.go:193] Checking for updates...
	I0531 18:08:08.316763  265084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:08:08.318244  265084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:08:08.319779  265084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:08:08.321340  265084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:08:08.323191  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:08:08.323603  265084 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:08:08.362346  265084 docker.go:137] docker version: linux-20.10.16
	I0531 18:08:08.362439  265084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:08:08.463602  265084 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:08:08.390259074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:08:08.463698  265084 docker.go:254] overlay module found
	I0531 18:08:08.465780  265084 out.go:177] * Using the docker driver based on existing profile
	I0531 18:08:08.467039  265084 start.go:284] selected driver: docker
	I0531 18:08:08.467049  265084 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:08.467161  265084 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:08:08.468025  265084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:08:08.564858  265084 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:08:08.495990048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:08:08.565102  265084 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:08:08.565123  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:08.565130  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:08.565142  265084 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:08.567536  265084 out.go:177] * Starting control plane node default-k8s-different-port-20220531175509-6903 in cluster default-k8s-different-port-20220531175509-6903
	I0531 18:08:08.568944  265084 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:08:08.570365  265084 out.go:177] * Pulling base image ...
	I0531 18:08:08.571649  265084 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:08:08.571672  265084 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:08:08.571689  265084 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:08:08.571699  265084 cache.go:57] Caching tarball of preloaded images
	I0531 18:08:08.571897  265084 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:08:08.571914  265084 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:08:08.572029  265084 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json ...
	I0531 18:08:08.619058  265084 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:08:08.619084  265084 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:08:08.619096  265084 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:08:08.619126  265084 start.go:352] acquiring machines lock for default-k8s-different-port-20220531175509-6903: {Name:mk53f02aa9701786e51ee0c8a5d73dcf46801d8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:08:08.619250  265084 start.go:356] acquired machines lock for "default-k8s-different-port-20220531175509-6903" in 60.577µs
	I0531 18:08:08.619274  265084 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:08:08.619282  265084 fix.go:55] fixHost starting: 
	I0531 18:08:08.619518  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:08:08.649852  265084 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220531175509-6903: state=Stopped err=<nil>
	W0531 18:08:08.649892  265084 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:08:08.651929  265084 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220531175509-6903" ...
	I0531 18:08:08.653246  265084 cli_runner.go:164] Run: docker start default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.036886  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:08:09.070303  265084 kic.go:416] container "default-k8s-different-port-20220531175509-6903" state is running.
	I0531 18:08:09.070670  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.103605  265084 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/config.json ...
	I0531 18:08:09.103829  265084 machine.go:88] provisioning docker machine ...
	I0531 18:08:09.103858  265084 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220531175509-6903"
	I0531 18:08:09.103909  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:09.134428  265084 main.go:134] libmachine: Using SSH client type: native
	I0531 18:08:09.134578  265084 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0531 18:08:09.134603  265084 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220531175509-6903 && echo "default-k8s-different-port-20220531175509-6903" | sudo tee /etc/hostname
	I0531 18:08:09.135241  265084 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37084->127.0.0.1:49437: read: connection reset by peer
	I0531 18:08:12.259673  265084 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220531175509-6903
	
	I0531 18:08:12.259750  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.291506  265084 main.go:134] libmachine: Using SSH client type: native
	I0531 18:08:12.291664  265084 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0531 18:08:12.291697  265084 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220531175509-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220531175509-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220531175509-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:08:12.398559  265084 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:08:12.398585  265084 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:08:12.398600  265084 ubuntu.go:177] setting up certificates
	I0531 18:08:12.398609  265084 provision.go:83] configureAuth start
	I0531 18:08:12.398666  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.431013  265084 provision.go:138] copyHostCerts
	I0531 18:08:12.431073  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:08:12.431088  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:08:12.431178  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:08:12.431291  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:08:12.431308  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:08:12.431354  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:08:12.431426  265084 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:08:12.431439  265084 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:08:12.431471  265084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:08:12.431572  265084 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220531175509-6903 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220531175509-6903]
	I0531 18:08:12.598055  265084 provision.go:172] copyRemoteCerts
	I0531 18:08:12.598106  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:08:12.598136  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.631111  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:12.714018  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0531 18:08:12.731288  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:08:12.747333  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:08:12.763254  265084 provision.go:86] duration metric: configureAuth took 364.63384ms
	I0531 18:08:12.763282  265084 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:08:12.763474  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:08:12.763490  265084 machine.go:91] provisioned docker machine in 3.659644302s
	I0531 18:08:12.763497  265084 start.go:306] post-start starting for "default-k8s-different-port-20220531175509-6903" (driver="docker")
	I0531 18:08:12.763505  265084 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:08:12.763543  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:08:12.763579  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.795235  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:12.873714  265084 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:08:12.876227  265084 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:08:12.876248  265084 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:08:12.876257  265084 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:08:12.876262  265084 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:08:12.876270  265084 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:08:12.876309  265084 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:08:12.876369  265084 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:08:12.876457  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:08:12.882555  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:08:12.898406  265084 start.go:309] post-start completed in 134.899493ms
	I0531 18:08:12.898470  265084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:08:12.898502  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:12.929840  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.011134  265084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:08:13.014770  265084 fix.go:57] fixHost completed within 4.39548261s
	I0531 18:08:13.014795  265084 start.go:81] releasing machines lock for "default-k8s-different-port-20220531175509-6903", held for 4.3955315s
	I0531 18:08:13.014869  265084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.046127  265084 ssh_runner.go:195] Run: systemctl --version
	I0531 18:08:13.046172  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.046174  265084 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:08:13.046264  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:08:13.079038  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.079600  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:08:13.163089  265084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:08:13.184388  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:08:13.192965  265084 docker.go:187] disabling docker service ...
	I0531 18:08:13.193006  265084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:08:13.201843  265084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:08:13.209984  265084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:08:13.281373  265084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:08:13.351161  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:08:13.359679  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:08:13.371415  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
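
The base64 payload above is the containerd config.toml that minikube renders for this profile and writes via base64 -d. A hedged sketch to read it back (the payload itself is elided here; copy it from the log line above):

  # Hedged sketch: decode the containerd config written above
  echo '<base64 payload from the log>' | base64 -d
  # Decoding the payload shows, under [plugins."io.containerd.grpc.v1.cri".cni],
  # bin_dir = "/opt/cni/bin" and conf_dir = "/etc/cni/net.mk", matching the
  # kubelet.cni-conf-dir extra option shown earlier in this run.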
	I0531 18:08:13.383601  265084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:08:13.389381  265084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:08:13.395293  265084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:08:13.467306  265084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:08:13.544767  265084 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:08:13.544838  265084 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:08:13.549043  265084 start.go:468] Will wait 60s for crictl version
	I0531 18:08:13.549097  265084 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:08:13.581186  265084 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:08:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
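
The fatal crictl message above ("server is not initialized yet") is expected this soon after restarting containerd: the log shows minikube waits up to 60s for the socket and for crictl version, and the retry succeeds about 11 seconds later. The same readiness probe by hand, as a hedged sketch run inside the node:

  # Hedged sketch: poll until containerd's CRI endpoint answers, then print versions
  until sudo crictl version >/dev/null 2>&1; do sleep 2; done
  sudo crictl version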
	I0531 18:08:24.627975  265084 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:08:24.650848  265084 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:08:24.650905  265084 ssh_runner.go:195] Run: containerd --version
	I0531 18:08:24.677319  265084 ssh_runner.go:195] Run: containerd --version
	I0531 18:08:24.704802  265084 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:08:24.706277  265084 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220531175509-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:08:24.735854  265084 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0531 18:08:24.738892  265084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:08:24.749702  265084 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:08:24.751112  265084 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:08:24.751189  265084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:08:24.773113  265084 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:08:24.773129  265084 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:08:24.773160  265084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:08:24.794357  265084 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:08:24.794373  265084 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:08:24.794406  265084 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:08:24.815845  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:24.815862  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:24.815876  265084 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:08:24.815892  265084 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220531175509-6903 NodeName:default-k8s-different-port-20220531175509-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:08:24.816032  265084 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220531175509-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
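The kubeadm config above is what minikube renders before invoking kubeadm; a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A hedged sketch for comparing it against kubeadm's stock defaults (it assumes the kubeadm binary sits next to kubelet under /var/lib/minikube/binaries/v1.23.6, the directory listed just below):

  # Hedged sketch: view the rendered config and kubeadm's defaults side by side
  minikube ssh -p default-k8s-different-port-20220531175509-6903   # open a shell in the node, then:
  sudo cat /var/tmp/minikube/kubeadm.yaml.new
  sudo /var/lib/minikube/binaries/v1.23.6/kubeadm config print init-defaults
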
	I0531 18:08:24.816118  265084 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220531175509-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
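
The systemd drop-in above replaces kubelet's ExecStart; note --cni-conf-dir=/etc/cni/net.mk, the same directory the containerd config earlier in this run points at. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below. A hedged sketch to confirm the flags took effect on the node:

  # Hedged sketch: show the effective kubelet unit and the live process flags
  minikube ssh -p default-k8s-different-port-20220531175509-6903   # open a shell in the node, then:
  systemctl cat kubelet                                # unit plus the 10-kubeadm.conf drop-in
  ps -o args= -C kubelet | tr ' ' '\n' | grep -i cni   # CNI-related flags on the running kubelet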
	I0531 18:08:24.816165  265084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:08:24.822458  265084 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:08:24.822505  265084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:08:24.828560  265084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (591 bytes)
	I0531 18:08:24.840392  265084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:08:24.851809  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0531 18:08:24.863569  265084 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:08:24.866080  265084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:08:24.875701  265084 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903 for IP: 192.168.76.2
	I0531 18:08:24.875793  265084 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:08:24.875829  265084 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:08:24.875892  265084 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/client.key
	I0531 18:08:24.875942  265084 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key.31bdca25
	I0531 18:08:24.875977  265084 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key
	I0531 18:08:24.876064  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:08:24.876092  265084 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:08:24.876104  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:08:24.876131  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:08:24.876152  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:08:24.876182  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:08:24.876220  265084 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:08:24.876773  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:08:24.892395  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 18:08:24.907892  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:08:24.923592  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/default-k8s-different-port-20220531175509-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:08:24.939375  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:08:24.954761  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:08:24.970309  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:08:24.985770  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:08:25.002079  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:08:25.017430  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:08:25.032835  265084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:08:25.048607  265084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:08:25.059993  265084 ssh_runner.go:195] Run: openssl version
	I0531 18:08:25.064220  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:08:25.070801  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.073567  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.073614  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:08:25.077984  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:08:25.084035  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:08:25.090628  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.093313  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.093361  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:08:25.097725  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:08:25.103766  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:08:25.110369  265084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.113149  265084 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.113180  265084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:08:25.117580  265084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
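The block above copies each CA into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (the 3ec20f2e.0, b5213941.0, and 51391683.0 names come from `openssl x509 -hash -noout`). A minimal sketch of the same idempotent pattern, with example.pem standing in as a hypothetical certificate name:

	# Derive the subject hash, then create the hashed symlink only if it
	# is not already present (same test -L || ln -fs pattern as the log).
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo /bin/bash -c "test -L /etc/ssl/certs/${hash}.0 || ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/${hash}.0"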
	I0531 18:08:25.123696  265084 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220531175509-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220531175509-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:08:25.123784  265084 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:08:25.123821  265084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:08:25.146591  265084 cri.go:87] found id: "52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	I0531 18:08:25.146619  265084 cri.go:87] found id: "cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783"
	I0531 18:08:25.146630  265084 cri.go:87] found id: "a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb"
	I0531 18:08:25.146638  265084 cri.go:87] found id: "1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11"
	I0531 18:08:25.146644  265084 cri.go:87] found id: "509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999"
	I0531 18:08:25.146653  265084 cri.go:87] found id: "ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f"
	I0531 18:08:25.146661  265084 cri.go:87] found id: ""
	I0531 18:08:25.146697  265084 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:08:25.157547  265084 cri.go:114] JSON = null
	W0531 18:08:25.157585  265084 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0531 18:08:25.157630  265084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:08:25.163950  265084 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:08:25.163970  265084 kubeadm.go:626] restartCluster start
	I0531 18:08:25.163999  265084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:08:25.169734  265084 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.170732  265084 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220531175509-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:08:25.171454  265084 kubeconfig.go:127] "default-k8s-different-port-20220531175509-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:08:25.172470  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:08:25.173986  265084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:08:25.179942  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.179983  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.186965  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.387200  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.387275  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.395846  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.587039  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.587105  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.595520  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.787853  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.787919  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.796087  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:25.987443  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:25.987592  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:25.995763  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.188042  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.188119  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.196440  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.387758  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.387821  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.395923  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.587200  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.587257  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.595464  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.787757  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.787847  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.796141  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:26.987434  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:26.987519  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:26.995749  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.188036  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.188093  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.196930  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.387163  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.387241  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.395603  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.587873  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.587940  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.596151  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.787448  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.787529  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.795830  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:27.987084  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:27.987165  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:27.995664  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.187945  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:28.188030  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:28.196347  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.196370  265084 api_server.go:165] Checking apiserver status ...
	I0531 18:08:28.196404  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:08:28.204102  265084 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
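The run of near-identical warnings above is a poll: roughly every 200ms minikube re-runs pgrep looking for a kube-apiserver process, and each exit status 1 (empty stdout and stderr) counts as one failed attempt. A sketch of the same poll-until-timeout shape, with the interval and attempt count chosen here for illustration rather than taken from minikube:

	# Poll for the apiserver process; give up after 15 tries (~3s).
	for i in $(seq 1 15); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.2
	done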
	I0531 18:08:28.204125  265084 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:08:28.204132  265084 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:08:28.204145  265084 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:08:28.204198  265084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:08:28.227632  265084 cri.go:87] found id: "52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463"
	I0531 18:08:28.227660  265084 cri.go:87] found id: "cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783"
	I0531 18:08:28.227671  265084 cri.go:87] found id: "a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb"
	I0531 18:08:28.227679  265084 cri.go:87] found id: "1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11"
	I0531 18:08:28.227685  265084 cri.go:87] found id: "509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999"
	I0531 18:08:28.227691  265084 cri.go:87] found id: "ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f"
	I0531 18:08:28.227700  265084 cri.go:87] found id: ""
	I0531 18:08:28.227705  265084 cri.go:232] Stopping containers: [52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463 cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783 a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb 1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11 509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999 ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f]
	I0531 18:08:28.227754  265084 ssh_runner.go:195] Run: which crictl
	I0531 18:08:28.230377  265084 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 52b0fa46cdf5143bbea46cbe418b0950950da97638318ff6044b59e775374463 cb3e6f9b5d67c2a283bda9e0bbc0e0f517f7ab7b42f4976050713300284bd783 a2c6538b95f742f9d5ee21c5797a79941f858c5517eb1f4d2fdc9500c345a1bb 1b1996168f6e9be6a3f1640087343337555a8776f9af1d3d128b846529927e11 509e04aaab068f3dd2225d737ce0b1eca67939750af5c56342d9cf66e5c24999 ea294bc0a9be25e2f0928e99c49b04e5a4dd08f2b432ad868ae50c74aec0533f
	I0531 18:08:28.253379  265084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
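Before rewriting any manifests, minikube stops every kube-system container it found and then the kubelet itself, so nothing restarts the old control plane mid-reconfigure. A condensed sketch of those two steps, assuming crictl is the runtime client as shown in the log:

	# Stop all kube-system containers, then stop the kubelet.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
	  | xargs -r sudo crictl stop
	sudo systemctl stop kubelet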
	I0531 18:08:28.263239  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:08:28.269611  265084 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 May 31 17:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 31 17:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 May 31 17:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 31 17:55 /etc/kubernetes/scheduler.conf
	
	I0531 18:08:28.269655  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0531 18:08:28.276169  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0531 18:08:28.282320  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0531 18:08:28.288727  265084 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.288764  265084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:08:28.294577  265084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0531 18:08:28.300576  265084 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:08:28.300611  265084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:08:28.306535  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:08:28.313163  265084 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:08:28.313181  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.354378  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.801587  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.930245  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:28.977387  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
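Rather than a full `kubeadm init`, the restart path re-runs individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the refreshed /var/tmp/minikube/kubeadm.yaml. A hedged recap of that sequence using the versioned binaries directory shown in the log:

	# Re-run only the phases needed for an in-place control-plane restart.
	# $phase is deliberately unquoted so it splits into subcommand words.
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done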
	I0531 18:08:29.027665  265084 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:08:29.027728  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:29.536233  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:30.036182  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:30.536067  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:31.035853  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:31.535756  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:32.036379  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:32.536341  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:33.036406  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:33.536689  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:34.036411  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:34.536112  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:35.036299  265084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:08:35.111746  265084 api_server.go:71] duration metric: took 6.084083791s to wait for apiserver process to appear ...
	I0531 18:08:35.111779  265084 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:08:35.111789  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:35.112142  265084 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0531 18:08:35.612870  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:38.446439  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:08:38.446468  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:08:38.612744  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:38.618441  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:38.618510  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:39.113066  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:39.117198  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:39.117223  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:39.612302  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:39.616945  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:08:39.616967  265084 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:08:40.112481  265084 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0531 18:08:40.117156  265084 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0531 18:08:40.122751  265084 api_server.go:140] control plane version: v1.23.6
	I0531 18:08:40.122771  265084 api_server.go:130] duration metric: took 5.010986211s to wait for apiserver health ...
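The healthz progression above is the normal bring-up order: the first probe is rejected with 403 because anonymous access is not yet authorized, later probes return 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and the endpoint only returns 200 once every hook reports ok (the bootstrapped system:public-info-viewer role is what eventually lets unauthenticated clients read /healthz). The same probe can be reproduced by hand; a sketch (-k only because the probe runs before the cluster CA is trusted):

	curl -k "https://192.168.76.2:8444/healthz?verbose"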
	I0531 18:08:40.122780  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:08:40.122788  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:08:40.124631  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:08:40.125848  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:08:40.129376  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:08:40.129394  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:08:40.142078  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
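With the docker driver, the containerd runtime, and no CNI specified, minikube falls back to kindnet: it verifies the portmap plugin exists under /opt/cni/bin, writes the rendered manifest to /var/tmp/minikube/cni.yaml, and applies it with the cluster's own kubectl. A condensed replay of those two commands for reference:

	stat /opt/cni/bin/portmap
	sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml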
	I0531 18:08:40.734264  265084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:08:40.740842  265084 system_pods.go:59] 9 kube-system pods found
	I0531 18:08:40.740873  265084 system_pods.go:61] "coredns-64897985d-92zgx" [b91e17cd-2735-4a67-a78b-9f06d1ea411e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740885  265084 system_pods.go:61] "etcd-default-k8s-different-port-20220531175509-6903" [13ef129d-4fca-4990-84b0-03bfdcfabf1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0531 18:08:40.740894  265084 system_pods.go:61] "kindnet-vdbp9" [79d3fb6a-0f34-4e42-809a-d4b9107ab071] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:08:40.740901  265084 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531175509-6903" [a547e888-b760-4d90-8f4c-50685def1dd3] Running
	I0531 18:08:40.740916  265084 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531175509-6903" [b23304a6-b5b1-4237-bfbb-6029f2c79380] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:08:40.740923  265084 system_pods.go:61] "kube-proxy-ff6gx" [4d094300-69cc-429e-8b17-52f2ddb8b9c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:08:40.740933  265084 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531175509-6903" [c7f2ccba-dc09-41b5-815a-1d7e16814c0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:08:40.740942  265084 system_pods.go:61] "metrics-server-b955d9d8-wvb9t" [f87f1c60-e753-4d02-8ae1-914a03b2b27a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740951  265084 system_pods.go:61] "storage-provisioner" [e1f494e4-cf90-42c5-b10b-93f3fff7bcc7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:08:40.740955  265084 system_pods.go:74] duration metric: took 6.673189ms to wait for pod list to return data ...
	I0531 18:08:40.740965  265084 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:08:40.743326  265084 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:08:40.743349  265084 node_conditions.go:123] node cpu capacity is 8
	I0531 18:08:40.743360  265084 node_conditions.go:105] duration metric: took 2.389262ms to run NodePressure ...
	I0531 18:08:40.743379  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:08:40.862173  265084 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:08:40.865722  265084 kubeadm.go:777] kubelet initialised
	I0531 18:08:40.865747  265084 kubeadm.go:778] duration metric: took 3.542091ms waiting for restarted kubelet to initialise ...
	I0531 18:08:40.865755  265084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:08:40.870532  265084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	I0531 18:08:42.875463  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:44.876615  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:47.375518  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:49.375999  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:51.875901  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:53.876034  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:56.376247  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:08:58.875836  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:00.876113  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:03.375777  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:05.375862  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.875726  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:09.875798  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.375479  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:14.375758  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.375804  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.375866  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.875902  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:23.375648  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.375756  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.875533  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:30.375823  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.875541  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:35.375635  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:37.378182  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.876171  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:42.376043  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.875548  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:46.876233  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.375708  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.875940  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:53.876111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 71 near-identical pod_ready.go:102 entries omitted (18:09:56 through 18:12:37): each poll, roughly one every 2.5s, reported the same Pending/Unschedulable status for pod "coredns-64897985d-92zgx", the lone node still tainted node.kubernetes.io/not-ready ...]
	I0531 18:12:39.875351  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.873471  265084 pod_ready.go:81] duration metric: took 4m0.002908121s waiting for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	E0531 18:12:40.873493  265084 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:12:40.873522  265084 pod_ready.go:38] duration metric: took 4m0.007756787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:12:40.873544  265084 kubeadm.go:630] restartCluster took 4m15.709568906s
	W0531 18:12:40.873671  265084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
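The four minutes of pod_ready.go:102 entries above are a fixed-interval readiness poll: fetch the pod, check its Ready condition, log the full status on a miss, and give up once the 4m0s budget runs out. A minimal sketch of that pattern with client-go, using illustrative names and error handling rather than minikube's actual pod_ready.go:

    // Sketch of the 4m "Ready" wait logged above; names and details are
    // assumptions, not minikube's actual pod_ready.go.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls roughly every 2.5s (the cadence of the log above)
    // until the pod reports Ready or the 4-minute budget is exhausted.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            fmt.Printf("pod %q in %q namespace doesn't have %q status: %+v\n", name, ns, "Ready", pod.Status)
            return false, nil
        })
    }

The poll can never succeed here: the pod is Unschedulable because the single node still carries the node.kubernetes.io/not-ready taint, so the wait times out and minikube falls back from restarting the cluster to resetting it, which is the kubeadm reset that follows.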
	I0531 18:12:40.873698  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:12:42.413886  265084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.540162536s)
	I0531 18:12:42.413945  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:12:42.423107  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:12:42.429872  265084 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:12:42.429912  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:12:42.436297  265084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:12:42.436331  265084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:12:55.656930  265084 out.go:204]   - Generating certificates and keys ...
	I0531 18:12:55.659590  265084 out.go:204]   - Booting up control plane ...
	I0531 18:12:55.662052  265084 out.go:204]   - Configuring RBAC rules ...
	I0531 18:12:55.664008  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:12:55.664023  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:12:55.665448  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:12:55.666615  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:12:55.670087  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:12:55.670101  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:12:55.683282  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
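The "scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)" entry means the kindnet manifest was generated in memory and streamed to the node over the already-open SSH connection rather than copied from a local file. A rough sketch of that write-over-SSH idea, assuming golang.org/x/crypto/ssh (hypothetical helper; minikube's ssh_runner differs in detail):

    // Sketch of the "scp memory --> <path> (<n> bytes)" pattern: pipe the
    // bytes into "sudo tee" on the node. Hypothetical helper, not minikube's.
    package sketch

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func copyBytes(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }

Once the bytes land on the node, the kubectl apply in the preceding Run line consumes them with the pinned v1.23.6 binary and the in-cluster kubeconfig.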
	I0531 18:12:56.287125  265084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:12:56.287250  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.287269  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903 minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.294761  265084 ops.go:34] apiserver oom_adj: -16
	I0531 18:12:56.356555  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.931369  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.430985  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.930763  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.431243  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.930845  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.431397  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.931568  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.431233  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.930831  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.430783  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.931582  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.431559  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.931164  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.431622  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.931432  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.431651  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.931602  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.431254  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.931669  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.431587  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.930870  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.431379  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.931781  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.431738  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.931001  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.988549  265084 kubeadm.go:1045] duration metric: took 12.701350416s to wait for elevateKubeSystemPrivileges.
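The burst of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: poll about twice a second until the default ServiceAccount exists, then bind cluster-admin to kube-system:default (12.7s in this run). A sketch of that retry, reusing the binary and kubeconfig paths from this log; the helper name and the 2-minute cap are assumptions:

    // Sketch of the "kubectl get sa default" retry loop above; the cap and
    // helper name are assumptions, the paths are the ones in this log.
    package sketch

    import (
        "context"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func waitDefaultSA(ctx context.Context) error {
        return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            cmd := exec.CommandContext(ctx,
                "sudo", "/var/lib/minikube/binaries/v1.23.6/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            // A non-zero exit simply means the ServiceAccount is not there yet.
            return cmd.Run() == nil, nil
        })
    }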
	I0531 18:13:08.988585  265084 kubeadm.go:397] StartCluster complete in 4m43.864893986s
	I0531 18:13:08.988604  265084 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:08.988717  265084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:13:08.989847  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:09.508027  265084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531175509-6903" rescaled to 1
	I0531 18:13:09.508095  265084 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:13:09.510016  265084 out.go:177] * Verifying Kubernetes components...
	I0531 18:13:09.508139  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:13:09.508164  265084 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:13:09.508352  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:13:09.511398  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:09.511420  265084 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511435  265084 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511445  265084 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511451  265084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511458  265084 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:13:09.511464  265084 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511494  265084 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511509  265084 addons.go:165] addon metrics-server should already be in state true
	I0531 18:13:09.511510  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511560  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511421  265084 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511623  265084 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511642  265084 addons.go:165] addon dashboard should already be in state true
	I0531 18:13:09.511686  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511794  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512032  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512055  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512135  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.524059  265084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:13:09.564034  265084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:13:09.565498  265084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.565568  265084 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.566183  265084 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.566993  265084 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:13:09.566996  265084 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:13:09.568335  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:13:09.568357  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:13:09.567020  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:13:09.568408  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.568430  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.567022  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.569999  265084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.568977  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.571342  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:13:09.571364  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:13:09.571420  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.602943  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
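The sed pipeline in the Run line above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway of this network (192.168.76.1). Reconstructed from the sed expression itself, the block inserted ahead of the forward plugin is:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }

fallthrough hands any other name back to the remaining Corefile plugins, so only the one synthetic record is special-cased.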
	I0531 18:13:09.610124  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.617756  265084 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.617783  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:13:09.617847  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.619119  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.626594  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.663799  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.809728  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:13:09.809753  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:13:09.810223  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:13:09.810246  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:13:09.816960  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.817408  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.824732  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:13:09.824754  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:13:09.825129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:13:09.825148  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:13:09.838196  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:13:09.838214  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:13:09.906924  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.906947  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:13:09.918653  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:13:09.918674  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:13:09.923798  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.931866  265084 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0531 18:13:10.002603  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:13:10.002630  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:13:10.020129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:13:10.020167  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:13:10.106375  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:13:10.106399  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:13:10.123765  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:13:10.123794  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:13:10.144076  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.144119  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:13:10.215134  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.622559  265084 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:11.339286  265084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.12406901s)
	I0531 18:13:11.341817  265084 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:13:11.343086  265084 addons.go:417] enableAddons completed in 1.834929772s
	I0531 18:13:11.534064  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:13.534515  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:15.534685  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.034548  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:20.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.033886  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:25.034013  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.534277  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:29.534310  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:31.534424  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:33.534780  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:35.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.033558  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:40.033618  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:42.034150  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:44.534599  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:46.535205  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:49.034908  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:51.534484  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:53.534591  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:55.536859  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:58.033966  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:00.534980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:02.535335  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:05.033642  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.534735  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:09.534892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:12.034154  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:14.034360  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:16.534633  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:18.534988  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:21.033579  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:23.534867  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:26.034090  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:28.534857  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:31.033495  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:33.034149  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:35.034245  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.534397  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:40.033940  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:42.534713  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:45.033654  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:47.033892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:49.034062  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:51.534464  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:54.033904  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:56.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:58.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:01.034476  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:03.534865  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:06.033980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:08.533980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:10.534643  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.033474  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:15.534881  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:18.033601  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:20.534891  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.033328  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:25.033998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:27.534916  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:30.033697  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:32.033898  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:34.034272  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:36.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:38.534463  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:40.534774  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:45.034265  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:47.534278  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:49.534496  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:51.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:54.033999  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:56.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:58.535059  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:01.033371  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:03.033446  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:05.033660  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:07.034297  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:09.534321  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:11.534699  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:14.033838  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:16.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:18.533927  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:20.534762  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.034186  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:25.034285  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:27.534416  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:30.033979  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.534354  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:34.534436  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:37.034012  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:39.534598  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:41.534728  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:44.033664  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:46.534914  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:48.535136  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:51.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:53.534196  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:55.534525  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:57.535035  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:00.033962  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.534939  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:05.033341  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:07.033678  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.034288  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.536916  265084 node_ready.go:38] duration metric: took 4m0.012822769s waiting for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:17:09.538829  265084 out.go:177] 
	W0531 18:17:09.540332  265084 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:17:09.540349  265084 out.go:239] * 
	W0531 18:17:09.541063  265084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:17:09.542651  265084 out.go:177] 

                                                
                                                
** /stderr **
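What the trace shows: the restart itself completes (addons report enabled at 18:13:11), but the node then polls as NotReady roughly every 2.5s for the entire wait budget, and the waiter bails at 18:17:09 with GUEST_START. The repeated node_ready.go:58 lines are a readiness poll against the apiserver's node object. As a rough sketch only (not minikube's actual code; the package name, helper name, and 2-second interval are assumptions), the equivalent check with client-go looks like:

	// Package nodewait: minimal sketch of the Ready-condition poll that
	// produces the node_ready.go lines above. Assumes a configured
	// *kubernetes.Clientset; names and interval are illustrative.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls the named node until its Ready condition is True
	// or the timeout elapses, mirroring the "waiting up to 6m0s" behaviour.
	func WaitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Matches the log form: has status "Ready":"False"
					fmt.Printf("node %q has status %q:%q\n", name, c.Type, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

Since the apiserver answers these polls throughout, one plausible reading is that pod networking/kubelet on the containerd runtime never converged inside the guest, rather than the control plane being down.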
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220531175509-6903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220531175509-6903
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220531175509-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a",
	        "Created": "2022-05-31T17:55:17.80847266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:08:09.029725982Z",
	            "FinishedAt": "2022-05-31T18:08:07.755765264Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hosts",
	        "LogPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a-json.log",
	        "Name": "/default-k8s-different-port-20220531175509-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220531175509-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220531175509-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220531175509-6903",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220531175509-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220531175509-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "71a74b75d7c8373b45e5345e309467d303c24cf6082ea84003df90e5a5173961",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/71a74b75d7c8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220531175509-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b24400321365",
	                        "default-k8s-different-port-20220531175509-6903"
	                    ],
	                    "NetworkID": "6fc1f79f54eab1e8df36883c8283b483c18aa0e383b30bdb7aa37eb035c0586e",
	                    "EndpointID": "0cf16e08dfbc9740717242d34f2958180fb422d48a507f378010469ef6cbd428",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
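Two details in the inspect output are worth noting. First, the container is healthy at the Docker layer ("Status": "running", "ExitCode": 0, RestartCount 0), so the timeout above happened inside the guest, not in the driver. Second, HostConfig.PortBindings requests ephemeral host ports (every "HostPort" is empty), and the resolved mappings only appear under NetworkSettings.Ports; that is where the sshutil.go lines earlier obtained 127.0.0.1:49437 for 22/tcp. A minimal sketch of extracting that mapping yourself (profile name taken from this report; struct fields follow the Docker Engine API's inspect JSON; error handling kept short):

	// inspectport: decode `docker inspect` output and print the host
	// endpoint mapped to the container's SSH port (22/tcp).
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"default-k8s-different-port-20220531175509-6903").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		// Expected output, per the JSON above: ssh endpoint: 127.0.0.1:49437
		p := containers[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("ssh endpoint: %s:%s\n", p.HostIp, p.HostPort)
	}

This is the same lookup the log performs repeatedly with a Go template, docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'", just expressed in code rather than a template string.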
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220531175509-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| stop    | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:09 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:15 UTC | 31 May 22 18:15 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:09:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:09:07.772021  269289 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:09:07.772181  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772198  269289 out.go:309] Setting ErrFile to fd 2...
	I0531 18:09:07.772206  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772308  269289 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:09:07.772576  269289 out.go:303] Setting JSON to false
	I0531 18:09:07.774031  269289 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6699,"bootTime":1654013849,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:09:07.774104  269289 start.go:125] virtualization: kvm guest
	I0531 18:09:07.776455  269289 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:09:07.777955  269289 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:09:07.777894  269289 notify.go:193] Checking for updates...
	I0531 18:09:07.779456  269289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:09:07.780983  269289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:07.782421  269289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:09:07.783767  269289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:09:07.785508  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:07.786063  269289 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:09:07.823614  269289 docker.go:137] docker version: linux-20.10.16
	I0531 18:09:07.823700  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:07.921312  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.851605066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:07.921419  269289 docker.go:254] overlay module found
	I0531 18:09:07.923729  269289 out.go:177] * Using the docker driver based on existing profile
	I0531 18:09:07.925091  269289 start.go:284] selected driver: docker
	I0531 18:09:07.925101  269289 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:07.925198  269289 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:09:07.926037  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:08.026136  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.954605342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:08.026404  269289 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:09:08.026427  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:08.026434  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
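Kindnet is the CNI minikube recommends whenever the docker driver is paired with a non-docker runtime such as containerd, and this profile also points kubelet at the non-default conf dir /etc/cni/net.mk (see the ExtraOptions in the config below). A quick sketch for checking what was actually written on the node, assuming the profile from this log is up:

    # list the CNI config files kindnet dropped into the custom conf dir
    minikube -p embed-certs-20220531175604-6903 ssh -- ls -l /etc/cni/net.mk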
	I0531 18:09:08.026459  269289 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:08.029735  269289 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 18:09:08.031102  269289 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:09:08.032588  269289 out.go:177] * Pulling base image ...
	I0531 18:09:08.033842  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:08.033877  269289 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:09:08.033887  269289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:09:08.033901  269289 cache.go:57] Caching tarball of preloaded images
	I0531 18:09:08.034419  269289 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:09:08.034462  269289 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
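The preload check above only confirms the tarball is present in the local cache; if corruption were suspected, the archive itself can be verified without extracting it. A sketch using the path from this log (lz4 -t only test-decompresses, writing nothing):

    preload=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
    ls -lh "$preload"   # sanity-check the size
    lz4 -t "$preload"   # verify the compressed stream is intact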
	I0531 18:09:08.034678  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.080805  269289 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:09:08.080832  269289 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:09:08.080845  269289 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:09:08.080882  269289 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:09:08.080970  269289 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 68.005µs
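The machines lock is an advisory lock (500ms retry delay, 10m timeout per the parameters above) that serializes concurrent minikube processes operating on the same machine. A rough shell equivalent of the wait-with-timeout pattern using flock(1); this is illustrative only, since minikube implements the lock in Go and the lock-file path below is hypothetical:

    # wait up to 600s for the lock, then run the guarded command
    flock --timeout 600 /tmp/mk429de72637f09b98b2265dcb2e061fa2d9b440.lock \
        -c 'echo acquired'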
	I0531 18:09:08.080987  269289 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:09:08.080995  269289 fix.go:55] fixHost starting: 
	I0531 18:09:08.081277  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.111663  269289 fix.go:103] recreateIfNeeded on embed-certs-20220531175604-6903: state=Stopped err=<nil>
	W0531 18:09:08.111695  269289 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:09:08.114165  269289 out.go:177] * Restarting existing docker container for "embed-certs-20220531175604-6903" ...
	I0531 18:09:03.375777  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:05.375862  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.875726  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.204868  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:09.704483  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:11.704965  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
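The interleaved pod_ready lines above come from two other test profiles running in parallel (PIDs 265084 and 261225): their coredns pods stay Pending because the node still carries the node.kubernetes.io/not-ready taint, so the scheduler refuses to place them. The same condition can be inspected by hand; here <profile> is a placeholder for whichever kubeconfig context the test created:

    kubectl --context <profile> -n kube-system get pod coredns-64897985d-92zgx -o wide
    kubectl --context <profile> describe node | grep 'Taints:'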
	I0531 18:09:08.115697  269289 cli_runner.go:164] Run: docker start embed-certs-20220531175604-6903
	I0531 18:09:08.473488  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.506356  269289 kic.go:416] container "embed-certs-20220531175604-6903" state is running.
	I0531 18:09:08.506679  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:08.539365  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.539556  269289 machine.go:88] provisioning docker machine ...
	I0531 18:09:08.539577  269289 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 18:09:08.539613  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:08.571324  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:08.571535  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:08.571555  269289 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 18:09:08.572224  269289 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40978->127.0.0.1:49442: read: connection reset by peer
	I0531 18:09:11.691333  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 18:09:11.691423  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:11.724149  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:11.724284  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:11.724303  269289 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:09:11.834529  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:09:11.834559  269289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:09:11.834578  269289 ubuntu.go:177] setting up certificates
	I0531 18:09:11.834589  269289 provision.go:83] configureAuth start
	I0531 18:09:11.834632  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:11.865871  269289 provision.go:138] copyHostCerts
	I0531 18:09:11.865924  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:09:11.865932  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:09:11.865982  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:09:11.866066  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:09:11.866081  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:09:11.866104  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:09:11.866152  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:09:11.866160  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:09:11.866179  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:09:11.866227  269289 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
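provision.go is re-issuing the machine's server certificate with the SAN list shown above, signed by the minikube CA. A rough openssl equivalent of that issuance (a sketch, not minikube's actual code; the file names are placeholders for the CA and server cert/key paths in the log):

    # key + CSR carrying the org from the log
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.embed-certs-20220531175604-6903"
    # sign with the CA and attach the SANs via an extension file
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:embed-certs-20220531175604-6903')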
	I0531 18:09:12.090438  269289 provision.go:172] copyRemoteCerts
	I0531 18:09:12.090495  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:09:12.090534  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.123286  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.206789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:09:12.223789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:09:12.239992  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:09:12.256268  269289 provision.go:86] duration metric: configureAuth took 421.669237ms
	I0531 18:09:12.256294  269289 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:09:12.256475  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:12.256491  269289 machine.go:91] provisioned docker machine in 3.716920297s
	I0531 18:09:12.256499  269289 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 18:09:12.256507  269289 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:09:12.256545  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:09:12.256574  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.289898  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.374015  269289 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:09:12.376709  269289 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:09:12.376729  269289 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:09:12.376738  269289 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:09:12.376744  269289 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:09:12.376754  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:09:12.376804  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:09:12.376870  269289 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:09:12.376948  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:09:12.383195  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:12.399428  269289 start.go:309] post-start completed in 142.913406ms
	I0531 18:09:12.399507  269289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:09:12.399549  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.432219  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.515046  269289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:09:12.518751  269289 fix.go:57] fixHost completed within 4.437752156s
	I0531 18:09:12.518774  269289 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 4.437792479s
	I0531 18:09:12.518855  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:12.550553  269289 ssh_runner.go:195] Run: systemctl --version
	I0531 18:09:12.550601  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.550641  269289 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:09:12.550697  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.582421  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.582872  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.659019  269289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:09:12.679913  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:09:12.688643  269289 docker.go:187] disabling docker service ...
	I0531 18:09:12.688702  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:09:12.697529  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:09:12.706100  269289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:09:09.875798  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.375479  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:13.705008  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.205471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.785582  269289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:09:12.855815  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:09:12.864020  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:09:12.875934  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
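The base64 blob decodes to the full /etc/containerd/config.toml being installed: a version 2 config with the overlayfs snapshotter, SystemdCgroup = false, sandbox_image k8s.gcr.io/pause:3.6, and cni conf_dir = /etc/cni/net.mk. To read it, either decode the blob locally or cat the file in place on the node:

    # paste the blob from the log into $blob, then:
    echo "$blob" | base64 -d | less
    # or read the installed file directly:
    minikube -p embed-certs-20220531175604-6903 ssh -- sudo cat /etc/containerd/config.toml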
	I0531 18:09:12.888218  269289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:09:12.894177  269289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:09:12.900048  269289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:09:12.967246  269289 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:09:13.034030  269289 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:09:13.034097  269289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:09:13.037697  269289 start.go:468] Will wait 60s for crictl version
	I0531 18:09:13.037755  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:13.061656  269289 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:09:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
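The fatal "server is not initialized yet" only means containerd's CRI server has not finished starting after the restart, so minikube schedules a retry (about 11s here). The manual equivalent is a simple poll:

    # wait until the CRI endpoint answers, then print the runtime version
    until sudo crictl version >/dev/null 2>&1; do sleep 2; done
    sudo crictl version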
	I0531 18:09:14.375758  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.375804  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.205548  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.207254  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.375866  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.875902  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.108851  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:24.132832  269289 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:09:24.132887  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.159676  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.189925  269289 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:09:24.191315  269289 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:09:24.222603  269289 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:09:24.225783  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.236610  269289 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:09:22.705143  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.205093  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.237875  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:24.237934  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.260983  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.261003  269289 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:09:24.261042  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.282964  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.282986  269289 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:09:24.283024  269289 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:09:24.304408  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:24.304433  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:24.304447  269289 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:09:24.304466  269289 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:09:24.304647  269289 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:09:24.304770  269289 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:09:24.304828  269289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:09:24.311429  269289 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:09:24.311492  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:09:24.317597  269289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 18:09:24.329144  269289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:09:24.340465  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
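The 2060-byte file written here is the kubeadm config rendered above. Since this is a restart of an existing cluster, minikube typically drives kubeadm through individual phases rather than a fresh init; on a brand-new node the same file would be consumed roughly like this (a sketch, as the exact preflight-error list minikube passes is not shown in this excerpt):

    sudo /var/lib/minikube/binaries/v1.23.6/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new \
        --ignore-preflight-errors=all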
	I0531 18:09:24.352005  269289 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:09:24.354594  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.362803  269289 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 18:09:24.362883  269289 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:09:24.362917  269289 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:09:24.362978  269289 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 18:09:24.363028  269289 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 18:09:24.363065  269289 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 18:09:24.363186  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:09:24.363227  269289 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:09:24.363235  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:09:24.363261  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:09:24.363280  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:09:24.363304  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:09:24.363343  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:24.363895  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:09:24.379919  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:09:24.395597  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:09:24.411294  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:09:24.426758  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:09:24.442477  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:09:24.458117  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:09:24.473675  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:09:24.489258  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:09:24.504896  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:09:24.520499  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:09:24.536081  269289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:09:24.547693  269289 ssh_runner.go:195] Run: openssl version
	I0531 18:09:24.552235  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:09:24.559008  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561937  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561976  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.566453  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:09:24.572576  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:09:24.579358  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582018  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582059  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.586386  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:09:24.592600  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:09:24.599203  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.601990  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.602019  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.606281  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
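
The sequence above installs each CA certificate into the OpenSSL hashed-certs directory: the cert is linked by name under /etc/ssl/certs, its subject hash is computed with openssl x509 -hash -noout, and a <hash>.0 symlink (b5213941.0 for minikubeCA.pem above) is created so TLS libraries that scan the directory can resolve the issuer. A minimal Go sketch of the same pattern follows; it is illustrative only, not minikube's implementation, and it collapses the log's test -L || ln -fs guard into remove-then-symlink:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links pemPath into /etc/ssl/certs under its OpenSSL
// subject-name hash, the same effect as the ln -fs commands in the log.
func installCACert(pemPath string) error {
	// openssl x509 -hash -noout prints the hash used for the c_rehash-style
	// symlink name (b5213941 for the minikube CA above).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // drop a stale link, if any, so the symlink call can't collide
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
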
	I0531 18:09:24.612350  269289 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:24.612448  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:09:24.612477  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:24.635020  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:24.635044  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:24.635056  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:24.635066  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:24.635074  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:24.635089  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:24.635098  269289 cri.go:87] found id: ""
	I0531 18:09:24.635135  269289 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:09:24.646413  269289 cri.go:114] JSON = null
	W0531 18:09:24.646451  269289 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
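
At 18:09:24.646 the two runtimes disagree: crictl reports six kube-system containers, while runc, asked for its container list as JSON, prints null. minikube treats this only as a failed unpause and carries on with the restart. A sketch of that consistency check, with assumed structure rather than minikube's actual code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// IDs of every kube-system container the CRI knows about, one per line.
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl ps:", err)
		return
	}
	ids := strings.Fields(string(psOut))

	// runc's view of the same runtime root; "null" unmarshals to a nil slice.
	listOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list:", err)
		return
	}
	var states []map[string]interface{}
	if err := json.Unmarshal(listOut, &states); err != nil {
		fmt.Println("decode:", err)
		return
	}
	if len(states) == 0 && len(ids) > 0 {
		// Non-fatal, mirroring the W line above: warn and continue.
		fmt.Printf("unpause failed: list returned 0 containers, but ps returned %d\n", len(ids))
	}
}
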
	I0531 18:09:24.646485  269289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:09:24.652823  269289 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:09:24.652845  269289 kubeadm.go:626] restartCluster start
	I0531 18:09:24.652871  269289 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:09:24.658784  269289 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.659393  269289 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531175604-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:24.659677  269289 kubeconfig.go:127] "embed-certs-20220531175604-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:09:24.660165  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:09:24.661391  269289 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:09:24.667472  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.667510  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.674690  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.875005  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.875078  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.883754  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.075164  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.075227  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.083571  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.275792  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.275862  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.284084  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.475412  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.475492  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.483769  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.675018  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.675093  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.683421  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.875700  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.875758  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.884127  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.075367  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.075441  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.083636  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.274875  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.274936  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.283259  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.475530  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.475600  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.483831  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.675186  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.675264  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.683305  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.875527  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.875591  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.883712  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.074838  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.074911  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.082943  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.275201  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.275279  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.283352  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.475732  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.475810  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.484027  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.675351  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.675423  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.683562  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.683579  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.683610  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.690956  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.690974  269289 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
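
The block above is a single poll loop: the same pgrep probe is retried roughly every 200ms until it finds a kube-apiserver process or the wait budget runs out, at which point minikube concludes the cluster needs reconfiguring. A sketch of the loop, with the interval and deadline inferred from the timestamps rather than taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until it reports a matching process or the
// deadline passes; pgrep exits non-zero whenever nothing matches.
func waitForAPIServerPID(budget time.Duration) (string, error) {
	end := time.Now().Add(budget)
	for time.Now().Before(end) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
	}
	return "", fmt.Errorf("timed out waiting for the condition")
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	if err != nil {
		fmt.Println("needs reconfigure:", err)
		return
	}
	fmt.Print("apiserver pid: ", pid)
}
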
	I0531 18:09:27.690980  269289 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:09:27.690988  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:09:27.691025  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:27.714735  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:27.714760  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:27.714770  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:27.714779  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:27.714788  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:27.714799  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:27.714808  269289 cri.go:87] found id: ""
	I0531 18:09:27.714813  269289 cri.go:232] Stopping containers: [1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad]
	I0531 18:09:27.714851  269289 ssh_runner.go:195] Run: which crictl
	I0531 18:09:27.717532  269289 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad
	I0531 18:09:27.740922  269289 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:09:27.750082  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:09:27.756789  269289 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:56 /etc/kubernetes/scheduler.conf
	
	I0531 18:09:27.756825  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:09:27.763097  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:09:27.769330  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:09:23.375648  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.375756  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.875533  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.704608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:29.705088  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.775774  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.775824  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:09:27.782146  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:09:27.788206  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.788249  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:09:27.794183  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800354  269289 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800371  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:27.840426  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.483103  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.611279  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.659562  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
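
Note that the restart does not re-run a full kubeadm init: it replays only the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of that phase sequence; the commands are taken from the log, but invoking kubeadm by absolute path here stands in for the sudo env PATH=... prefix:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.23.6/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		// Phases run in the same order as the log above; stop on first failure.
		out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput()
		if err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", args, err, out)
			return
		}
	}
}
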
	I0531 18:09:28.717430  269289 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:09:28.717495  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.225924  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.725826  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.225776  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.725310  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.225503  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.725612  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.225964  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.726000  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.375823  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.875541  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.205237  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:34.205608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:36.704946  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:33.225375  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:33.726285  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.225275  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.725662  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.736940  269289 api_server.go:71] duration metric: took 6.019510906s to wait for apiserver process to appear ...
	I0531 18:09:34.736971  269289 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:09:34.736983  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:34.737332  269289 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0531 18:09:35.238095  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:35.375635  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:37.378182  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:38.110402  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:09:38.110472  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:09:38.237764  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.306327  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.306358  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:38.737926  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.744239  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.744265  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.237517  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.242116  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:39.242143  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.738349  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.743589  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:09:39.749147  269289 api_server.go:140] control plane version: v1.23.6
	I0531 18:09:39.749166  269289 api_server.go:130] duration metric: took 5.012189517s to wait for apiserver health ...
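
The healthz exchange above traces the apiserver's warm-up: the anonymous probe is first rejected with 403 while RBAC is not yet bootstrapped, then /healthz itself returns 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 "ok" about five seconds in. A sketch of a probe that treats anything but 200 as retryable; the timeouts are assumptions, and certificate verification is skipped because the host does not trust the apiserver's serving certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Anonymous probe against a self-signed serving cert, as in the log.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403 (RBAC not bootstrapped) and 500 (hooks failing) both mean retry.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	fmt.Println("timed out waiting for apiserver health")
}
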
	I0531 18:09:39.749173  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:39.749179  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:39.751213  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:09:38.705223  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:41.205391  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.752701  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:09:39.818871  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:09:39.818891  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:09:39.834366  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:09:40.442544  269289 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:09:40.449925  269289 system_pods.go:59] 9 kube-system pods found
	I0531 18:09:40.449965  269289 system_pods.go:61] "coredns-64897985d-w2s2k" [272d1735-077b-4af0-a2fa-5b0f85c8e4fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.449977  269289 system_pods.go:61] "etcd-embed-certs-20220531175604-6903" [73942f36-bfae-49e5-87fb-43820d0182cc] Running
	I0531 18:09:40.449988  269289 system_pods.go:61] "kindnet-jrlsl" [c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:09:40.449999  269289 system_pods.go:61] "kube-apiserver-embed-certs-20220531175604-6903" [ecfb98e0-1317-4251-b80f-93138d93521a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:09:40.450014  269289 system_pods.go:61] "kube-controller-manager-embed-certs-20220531175604-6903" [f655b743-bb70-49d9-ab57-6575a05ae6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:09:40.450024  269289 system_pods.go:61] "kube-proxy-nvktf" [a3c917bd-93a0-40b6-85c5-7ea637a1aaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:09:40.450039  269289 system_pods.go:61] "kube-scheduler-embed-certs-20220531175604-6903" [7ff61926-9d06-4412-b29e-a3b958ae7fdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:09:40.450056  269289 system_pods.go:61] "metrics-server-b955d9d8-5hcnq" [bc956dda-3db9-478e-8fe9-4f4375857a12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450067  269289 system_pods.go:61] "storage-provisioner" [d2e35bf4-3bfa-46e5-91c7-70a4f1ab348a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450075  269289 system_pods.go:74] duration metric: took 7.509225ms to wait for pod list to return data ...
	I0531 18:09:40.450087  269289 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:09:40.452551  269289 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:09:40.452581  269289 node_conditions.go:123] node cpu capacity is 8
	I0531 18:09:40.452591  269289 node_conditions.go:105] duration metric: took 2.4994ms to run NodePressure ...
	I0531 18:09:40.452605  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:40.569376  269289 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573483  269289 kubeadm.go:777] kubelet initialised
	I0531 18:09:40.573510  269289 kubeadm.go:778] duration metric: took 4.104573ms waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573518  269289 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:09:40.580067  269289 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
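
The pod_ready wait that produces the repeated entries below is a plain condition poll: fetch the pod, look for a Ready condition with status True, and retry until the 4m0s budget is spent. A client-go sketch of the same check; the poll interval is an assumption, and the kubeconfig path is read from $KUBECONFIG rather than hard-coded:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries a Ready condition set to True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget from the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-64897985d-w2s2k", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod never became Ready within the budget")
}
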
	I0531 18:09:42.585321  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.876171  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:42.376043  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:43.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:45.705136  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.586120  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:47.085707  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.875548  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:46.876233  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:48.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:50.206057  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.585053  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.585603  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.375708  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.875940  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:52.705353  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.705507  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.084883  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.085413  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:53.876111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.375796  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:57.205002  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:59.704756  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.085579  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.584952  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.585345  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.875791  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.876220  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.205444  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.205856  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:06.704493  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:07.085004  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:03.375587  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:05.876410  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.705331  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:11.205277  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:09.585758  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.085534  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.375300  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:10.375906  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.875744  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:13.205338  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:15.704598  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.085723  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:16.085916  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.876170  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.376062  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.705091  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.205306  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:18.584998  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.585431  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:19.875325  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:21.875824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:22.704807  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.205734  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.085208  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.585032  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.585926  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
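The taint named in every poll can be confirmed on the node object itself. A short client-go sketch (same placeholder kubeconfig assumption as above) that lists each node's taints:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, t := range n.Spec.Taints {
			// Expected while the CNI is down: node.kubernetes.io/not-ready:NoSchedule
			fmt.Printf("%s\t%s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
		}
	}
}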
	I0531 18:10:23.876246  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:26.375824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.704471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.205614  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.085645  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:28.375853  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.875828  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.876426  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.205748  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.705114  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.586123  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.084913  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:35.376094  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.376934  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.204855  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.205282  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.704795  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.585507  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:42.084955  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.875448  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.875776  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:43.704835  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:45.705043  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.085118  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.085542  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.375440  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.876272  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.205603  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:50.704404  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.586090  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.085610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:49.376052  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.876540  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:52.705054  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:55.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:53.585839  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.084986  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:54.376090  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.875598  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:57.704861  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.205848  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.085315  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.085770  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.585759  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.876074  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.876394  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.704985  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.702287  261225 pod_ready.go:81] duration metric: took 4m0.002387032s waiting for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	E0531 18:11:03.702314  261225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:11:03.702336  261225 pod_ready.go:38] duration metric: took 4m0.006822044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:11:03.702359  261225 kubeadm.go:630] restartCluster took 4m15.094217425s
	W0531 18:11:03.702487  261225 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
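The 4m0s ceiling logged at pod_ready.go:81 is a poll-until-timeout loop; once it expires, minikube abandons restartCluster and falls back to kubeadm reset plus a fresh kubeadm init, as the next lines show. A minimal sketch of that wait-then-give-up pattern using apimachinery's wait package — the 2s interval and the condition body are illustrative, not minikube's exact code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name := "coredns-64897985d-8cptk" // pod name from the log above
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		// At this point minikube logs "will not retry!" and resets the cluster.
		fmt.Println("timed out waiting for pod to be Ready:", err)
	}
}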
	I0531 18:11:03.702514  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:11:05.343353  261225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.640813037s)
	I0531 18:11:05.343437  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:05.352859  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:11:05.359859  261225 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:11:05.359907  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:11:05.366334  261225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
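The exit status 2 above is expected: kubeadm reset (completed at 18:11:05) removed the four kubeconfig files, so the stale-config probe finds nothing and cleanup is skipped before kubeadm init re-creates them. A local os/exec sketch of the same probe — illustrative only, not minikube's ssh_runner, which runs the command over SSH inside the node container:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "ls", "-la",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf")
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Matches the log: non-zero exit means no stale configs to clean.
			fmt.Printf("config check failed (exit %d), skipping stale config cleanup\n", ee.ExitCode())
			return
		}
		panic(err)
	}
	fmt.Println("existing configs found; stale config cleanup would run")
}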
	I0531 18:11:05.366377  261225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:11:04.587602  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.085341  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.375499  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:05.375669  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.375797  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:12.086457  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.376111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:11.875588  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:14.586248  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:17.085311  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:13.875622  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:16.375856  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
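
Note: the pod_ready.go:102 lines above are a poll loop. The CoreDNS pods stay Pending with a single PodScheduled=False condition because the scheduler refuses to place them on a node that still carries the node.kubernetes.io/not-ready taint, i.e. the CNI plugin has not come up yet. Below is a minimal client-go sketch of the same readiness check; it is not minikube's actual pod_ready.go code, and the kubeconfig path and error handling are simplified.

// podready_check.go - fetch a pod and look for a PodReady condition with
// status True, mirroring what pod_ready.go:102 keeps logging above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(
		context.TODO(), "coredns-64897985d-w2s2k", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	// While the node keeps the not-ready taint, the pod has only a
	// PodScheduled=False condition, so this prints ready=false phase=Pending.
	fmt.Printf("pod %s ready=%v phase=%s\n", pod.Name, ready, pod.Status.Phase)
}
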
	I0531 18:11:18.709173  261225 out.go:204]   - Generating certificates and keys ...
	I0531 18:11:18.711907  261225 out.go:204]   - Booting up control plane ...
	I0531 18:11:18.714606  261225 out.go:204]   - Configuring RBAC rules ...
	I0531 18:11:18.716465  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:11:18.716486  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:11:18.717949  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:11:18.719198  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:11:18.722600  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:11:18.722616  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:11:18.735057  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:11:19.350353  261225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:11:19.350404  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.350427  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531175323-6903 minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.430085  261225 ops.go:34] apiserver oom_adj: -16
	I0531 18:11:19.430086  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.983488  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.483888  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:21.483016  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
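
Note: the stanza above shows the post-kubeadm bootstrap for the no-preload cluster: the kindnet CNI manifest is applied with the bundled kubectl, the minikube-rbac clusterrolebinding grants cluster-admin to kube-system:default, the node gets its minikube.k8s.io labels, and then "kubectl get sa default" is retried (continuing below) until the default ServiceAccount exists, since a kubeadm cluster cannot run workloads before the ServiceAccount controller creates it. A sketch of that wait loop using client-go instead of shelling out; the 500ms interval and 2m timeout are illustrative, not minikube's values.

// sa_wait.go - poll until the "default" ServiceAccount appears in the
// default namespace, the condition behind the repeated get-sa runs above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := clientset.CoreV1().ServiceAccounts("default").Get(
			context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep polling
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is present")
}
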
	I0531 18:11:19.087921  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.585530  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.876428  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.375897  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.983544  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.482945  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.483551  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.983592  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.483659  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.483167  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.982981  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:26.483682  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.585627  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.585786  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.585848  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:23.375949  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.375976  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.875813  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:26.983223  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.483242  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.983137  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.483188  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.483879  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.983081  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.483570  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.483729  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.537008  261225 kubeadm.go:1045] duration metric: took 12.186656817s to wait for elevateKubeSystemPrivileges.
	I0531 18:11:31.537033  261225 kubeadm.go:397] StartCluster complete in 4m42.970207425s
	I0531 18:11:31.537049  261225 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:31.537139  261225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:11:31.538101  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:32.051475  261225 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531175323-6903" rescaled to 1
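
Note: kapi.go:244 above records the coredns Deployment being rescaled to 1; kubeadm ships two CoreDNS replicas, which is more than a single-node cluster needs. A hedged client-go sketch of that rescale (not minikube's kapi.go code):

// coredns_rescale.go - shrink the coredns Deployment to one replica via the
// scale subresource.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deployments := clientset.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1 // one replica suffices on a single-node cluster
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
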
	I0531 18:11:32.051538  261225 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:11:32.053328  261225 out.go:177] * Verifying Kubernetes components...
	I0531 18:11:32.051603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:11:32.051631  261225 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:11:32.051839  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:11:32.054610  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:32.054630  261225 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054636  261225 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054661  261225 addons.go:65] Setting dashboard=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054667  261225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531175323-6903"
	I0531 18:11:32.054666  261225 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054677  261225 addons.go:153] Setting addon dashboard=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054685  261225 addons.go:165] addon dashboard should already be in state true
	I0531 18:11:32.054694  261225 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:32.054650  261225 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054708  261225 addons.go:165] addon metrics-server should already be in state true
	W0531 18:11:32.054715  261225 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:11:32.054731  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054749  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054752  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.055002  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055252  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055266  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055256  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.065260  261225 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:11:32.103052  261225 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:11:32.107208  261225 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108480  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:11:32.108501  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:11:32.109942  261225 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108562  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.111742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:11:32.111790  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:11:32.111836  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.114237  261225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:11:30.085797  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.086472  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:30.376185  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.875986  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.115989  261225 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.116015  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:11:32.116006  261225 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.116032  261225 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:11:32.116057  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.116079  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.116746  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.154692  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:11:32.172159  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176336  261225 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.176359  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:11:32.176359  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176412  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.178381  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.214197  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.405898  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:11:32.405927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:11:32.418989  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.420809  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.421818  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:11:32.421841  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:11:32.427921  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:11:32.427945  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:11:32.502461  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:11:32.502491  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:11:32.513965  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:11:32.513988  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:11:32.519888  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.519911  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:11:32.534254  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:11:32.534276  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:11:32.613065  261225 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
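
Note: start.go:806 above confirms the host-record injection. The bash pipeline run at 18:11:32.154 rewrites the coredns ConfigMap, inserting a hosts block that resolves host.minikube.internal to the gateway IP (192.168.67.1) just above the forward plugin in the Corefile. A stdlib-only sketch of the same Corefile edit; writing the result back to the ConfigMap, which the pipeline does with kubectl replace, is omitted here.

// corefile_inject.go - insert a hosts{} block above the
// "forward . /etc/resolv.conf" line of a Corefile, as the sed pipeline does.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(block) // place the hosts block before the forward plugin
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Abbreviated sample Corefile for illustration only.
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.67.1"))
}
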
	I0531 18:11:32.613879  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.624901  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:11:32.624927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:11:32.718626  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:11:32.718699  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:11:32.811670  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:11:32.811701  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:11:32.908742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:11:32.908771  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:11:33.007964  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.007996  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:11:33.107854  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.513893  261225 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:34.072421  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.312128  261225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.204221815s)
	I0531 18:11:34.314039  261225 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:11:34.315350  261225 addons.go:417] enableAddons completed in 2.263731051s
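
Note: the addon flow above stages each manifest under /etc/kubernetes/addons (the "scp memory" lines) and then applies a whole group in a single kubectl invocation, as in the metrics-server and dashboard applies just completed. An equivalent Go sketch using os/exec, with the binary, kubeconfig, and manifest paths taken from the log; it only makes sense when run inside the minikube node, and the sudo wrapper from the log is dropped.

// addons_apply.go - apply a group of addon manifests in one kubectl call.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	files := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.23.6/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
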
	I0531 18:11:36.571090  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.585378  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.085134  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:35.375998  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.876022  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:38.571611  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:40.571791  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
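
Note: node_ready.go:58 above polls the node's NodeReady condition, which stays False until the container runtime reports a working CNI network on the node. A minimal client-go sketch of that check; the node name is the one from the log and the kubeconfig path is the default.

// nodeready_check.go - report the node's NodeReady condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(
		context.TODO(), "no-preload-20220531175323-6903", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// Prints Ready=False with a KubeletNotReady-style reason until CNI is up.
			fmt.Printf("node %s Ready=%s reason=%s\n", node.Name, cond.Status, cond.Reason)
		}
	}
}
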
	I0531 18:11:39.585652  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.085745  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:40.375543  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.375912  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.571860  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:45.071529  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:44.585489  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.084675  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:44.875655  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:46.876121  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.072141  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.570942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:51.571718  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.085124  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.585600  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:48.876234  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.375464  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:54.071361  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:56.072083  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:53.585630  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.085163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:53.876096  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.375046  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.571942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:01.071618  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:58.585559  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:01.085890  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.376093  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:00.876187  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:02.876231  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
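
Note: the scheduler message repeated above, "1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate", follows from the node lifecycle controller keeping that taint on any node whose Ready condition is False; pods without a matching toleration remain unschedulable. A sketch that lists a node's taints to confirm this; the node name below is a placeholder, since the log does not name the node for this cluster.

// taints_list.go - print the taints on a node.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodeName := "minikube" // placeholder: substitute the cluster's node name
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Expect node.kubernetes.io/not-ready=:NoSchedule while the CNI is down.
	for _, t := range node.Spec.Taints {
		fmt.Printf("%s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
}
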
	I0531 18:12:03.072143  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:05.570848  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:03.086038  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.585743  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.375789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.875852  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.570951  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:09.571526  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:11.571905  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:08.085268  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.585201  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.375245  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:12.376080  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.071606  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:16.072126  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:13.085556  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:15.085642  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.585172  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.875789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.375091  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:18.571244  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:20.572070  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:19.585882  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:22.085420  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:19.375353  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:21.376109  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.071952  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:25.571538  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:24.586045  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.586231  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.876833  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.375751  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:27.571821  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.571882  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.085641  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:31.085685  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:28.875672  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:30.875820  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.876354  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.071187  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:34.071420  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:36.071564  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:33.585013  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.585739  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.375221  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:37.376282  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:38.570968  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:40.571348  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:38.085427  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.585163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:42.585706  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:39.875351  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.873471  265084 pod_ready.go:81] duration metric: took 4m0.002908121s waiting for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	E0531 18:12:40.873493  265084 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:12:40.873522  265084 pod_ready.go:38] duration metric: took 4m0.007756787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:12:40.873544  265084 kubeadm.go:630] restartCluster took 4m15.709568906s
	W0531 18:12:40.873671  265084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:12:40.873698  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:12:42.413886  265084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.540162536s)
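
Once the 4m0s wait budget for the system-critical pods is exhausted, restartCluster gives up and minikube falls back to a full teardown of the control plane before re-initializing it. The reset it runs inside the node container (copied verbatim from the log above) can also be issued by hand over minikube ssh:

    $ sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
          kubeadm reset --cri-socket /run/containerd/containerd.sock --force
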
	I0531 18:12:42.413945  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:12:42.423107  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:12:42.429872  265084 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:12:42.429912  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:12:42.436297  265084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:12:42.436331  265084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
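
The ls failure above is expected: kubeadm reset removed the kubeconfig files under /etc/kubernetes, so the stale-config cleanup is skipped and kubeadm init regenerates them. The long --ignore-preflight-errors list is minikube's standard skip set, covering directories and manifest files that may survive a reset plus checks such as Swap, Mem and SystemVerification that routinely fail inside a docker-driver container. A trimmed, illustrative re-run (the full flag list is in the log line above):

    $ sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
          kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
          --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Swap,Mem,SystemVerification
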
	I0531 18:12:42.571552  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.072252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.084735  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.085284  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.570931  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.571608  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:51.571648  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.584898  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:51.586112  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.656930  265084 out.go:204]   - Generating certificates and keys ...
	I0531 18:12:55.659590  265084 out.go:204]   - Booting up control plane ...
	I0531 18:12:55.662052  265084 out.go:204]   - Configuring RBAC rules ...
	I0531 18:12:55.664008  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:12:55.664023  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:12:55.665448  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:12:54.071703  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:56.071909  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:54.085829  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:56.584911  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.666615  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:12:55.670087  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:12:55.670101  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:12:55.683282  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
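
With the cluster re-initialized, minikube picks a CNI: for the docker driver with the containerd runtime it recommends kindnet, stages the manifest into the node as /var/tmp/minikube/cni.yaml (the "scp memory" lines are in-memory file copies over SSH), and applies it with the bundled kubectl, exactly as logged:

    $ sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply \
          --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
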
	I0531 18:12:56.287125  265084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:12:56.287250  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.287269  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903 minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.294761  265084 ops.go:34] apiserver oom_adj: -16
	I0531 18:12:56.356555  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.931369  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.430985  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.930763  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.072370  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:00.571645  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:58.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:00.585876  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:58.431243  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.930845  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.431397  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.931568  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.431233  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.930831  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.430783  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.931582  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.431559  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.931164  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.072253  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:05.571252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:03.085192  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:05.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:03.431622  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.931432  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.431651  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.931602  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.431254  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.931669  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.431587  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.930870  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.431379  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.931781  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.431738  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.931001  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.988549  265084 kubeadm.go:1045] duration metric: took 12.701350416s to wait for elevateKubeSystemPrivileges.
	I0531 18:13:08.988585  265084 kubeadm.go:397] StartCluster complete in 4m43.864893986s
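
The burst of identical "kubectl get sa default" calls above is elevateKubeSystemPrivileges polling at roughly 500ms intervals until the default service account exists; here that wait accounted for 12.7s of the 4m43s StartCluster total. It follows the minikube-rbac binding created at 18:12:56, which grants cluster-admin to the kube-system default service account (verbatim from the log):

    $ sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac \
          --clusterrole=cluster-admin --serviceaccount=kube-system:default \
          --kubeconfig=/var/lib/minikube/kubeconfig
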
	I0531 18:13:08.988604  265084 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:08.988717  265084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:13:08.989847  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:09.508027  265084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531175509-6903" rescaled to 1
	I0531 18:13:09.508095  265084 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:13:09.510016  265084 out.go:177] * Verifying Kubernetes components...
	I0531 18:13:09.508139  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:13:09.508164  265084 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:13:09.508352  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:13:09.511398  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:09.511420  265084 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511435  265084 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511445  265084 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511451  265084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511458  265084 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:13:09.511464  265084 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511494  265084 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511509  265084 addons.go:165] addon metrics-server should already be in state true
	I0531 18:13:09.511510  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511560  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511421  265084 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511623  265084 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511642  265084 addons.go:165] addon dashboard should already be in state true
	I0531 18:13:09.511686  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511794  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512032  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512055  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512135  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.524059  265084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:13:09.564034  265084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:13:09.565498  265084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.565568  265084 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.566183  265084 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.566993  265084 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:13:09.566996  265084 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:13:09.568335  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:13:09.568357  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:13:09.567020  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:13:09.568408  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.568430  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.567022  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.569999  265084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.568977  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.571342  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:13:09.571364  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:13:09.571420  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.602943  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:13:09.610124  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.617756  265084 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.617783  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:13:09.617847  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.619119  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.626594  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.663799  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.809728  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:13:09.809753  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:13:09.810223  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:13:09.810246  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:13:09.816960  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.817408  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.824732  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:13:09.824754  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:13:09.825129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:13:09.825148  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:13:09.838196  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:13:09.838214  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:13:09.906924  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.906947  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:13:09.918653  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:13:09.918674  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:13:09.923798  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.931866  265084 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
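
The sed pipeline at 18:13:09.602943 is how that host record lands in CoreDNS: minikube splices a hosts block mapping host.minikube.internal to the host gateway 192.168.76.1 into the coredns ConfigMap and replaces it in place. A hedged way to inspect the result from outside the node, assuming kubectl is pointed at this profile's kubeconfig:

    $ kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A3 host.minikube.internal
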
	I0531 18:13:10.002603  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:13:10.002630  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:13:10.020129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:13:10.020167  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:13:10.106375  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:13:10.106399  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:13:10.123765  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:13:10.123794  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:13:10.144076  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.144119  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:13:10.215134  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.622559  265084 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:11.339286  265084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.12406901s)
	I0531 18:13:11.341817  265084 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
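
Each addon follows the same pattern visible above: manifests are staged under /etc/kubernetes/addons/ via in-memory SCP, then applied in a single kubectl invocation; the ten dashboard manifests went through one apply that completed in about 1.12s. An illustrative subset of that call (the full file list is in the log):

    $ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.23.6/kubectl apply \
          -f /etc/kubernetes/addons/dashboard-ns.yaml \
          -f /etc/kubernetes/addons/dashboard-dp.yaml
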
	I0531 18:13:07.571617  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:09.572391  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:08.085066  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:10.085481  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:12.585610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:11.343086  265084 addons.go:417] enableAddons completed in 1.834929772s
	I0531 18:13:11.534064  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:12.071842  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:14.072366  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:16.571700  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:15.085503  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:17.585477  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:13.534515  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:15.534685  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.034548  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.571742  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:21.071863  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:20.085466  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:22.585412  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:20.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.033886  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.570885  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:25.571318  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:24.585439  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:27.085126  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:25.034013  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.534277  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.571734  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:30.071441  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:29.585497  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:32.084886  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:29.534310  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:31.534424  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:32.072003  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.571176  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:36.571707  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:36.585785  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:33.534780  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:35.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.033558  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.571878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:41.071580  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:39.085376  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:40.582100  269289 pod_ready.go:81] duration metric: took 4m0.001996579s waiting for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
	E0531 18:13:40.582169  269289 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:13:40.582215  269289 pod_ready.go:38] duration metric: took 4m0.00868413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:13:40.582275  269289 kubeadm.go:630] restartCluster took 4m15.929419982s
	W0531 18:13:40.582469  269289 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:13:40.582501  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:13:42.189588  269289 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.6070578s)
	I0531 18:13:42.189653  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:42.199068  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:13:42.205821  269289 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:13:42.205873  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:13:42.212208  269289 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:13:42.212266  269289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:13:40.033618  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:42.034150  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:43.072273  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:45.571260  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:44.534599  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:46.535205  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:48.071960  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:50.570994  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:49.034908  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:51.534484  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:56.088137  269289 out.go:204]   - Generating certificates and keys ...
	I0531 18:13:56.090627  269289 out.go:204]   - Booting up control plane ...
	I0531 18:13:56.093129  269289 out.go:204]   - Configuring RBAC rules ...
	I0531 18:13:56.094774  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:13:56.094792  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:13:56.096311  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:13:52.571414  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:55.071807  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:56.097594  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:13:56.101201  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:13:56.101218  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:13:56.113794  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:13:56.748029  269289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:13:56.748093  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.748112  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.822037  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.825886  269289 ops.go:34] apiserver oom_adj: -16
	I0531 18:13:57.376176  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:53.534591  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:55.536859  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:58.033966  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:57.071988  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:59.571338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:01.573338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:57.876331  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.376391  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.876462  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.376020  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.876318  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.375732  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.876322  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.376258  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.875649  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:02.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.534980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:02.535335  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:04.071798  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:06.571345  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:02.875698  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.376128  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.876242  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.376344  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.875652  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.375884  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.876374  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.375802  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.876594  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:07.376348  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.033642  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.534735  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.876085  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.431068  269289 kubeadm.go:1045] duration metric: took 11.683021831s to wait for elevateKubeSystemPrivileges.
	I0531 18:14:08.431097  269289 kubeadm.go:397] StartCluster complete in 4m43.818756101s
	I0531 18:14:08.431119  269289 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.431248  269289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:14:08.432696  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.947002  269289 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 18:14:08.947054  269289 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:14:08.948675  269289 out.go:177] * Verifying Kubernetes components...
	I0531 18:14:08.947153  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:14:08.947168  269289 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:14:08.947347  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:14:08.950015  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:14:08.950062  269289 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950074  269289 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950082  269289 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950092  269289 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950100  269289 addons.go:165] addon metrics-server should already be in state true
	I0531 18:14:08.950148  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950083  269289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 18:14:08.950101  269289 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950271  269289 addons.go:165] addon dashboard should already be in state true
	I0531 18:14:08.950319  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950061  269289 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950364  269289 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950378  269289 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:14:08.950417  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950518  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950641  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950757  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950872  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.963248  269289 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:14:08.996303  269289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:14:08.997754  269289 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:08.997776  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:14:08.997822  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:08.999414  269289 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.999437  269289 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:14:08.999466  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:09.001079  269289 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:14:08.999831  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:09.003755  269289 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:14:09.002479  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:14:09.005292  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:14:09.005383  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.007089  269289 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:14:09.008807  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:14:09.008840  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:14:09.008896  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.039305  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:14:09.047368  269289 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.047395  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:14:09.047454  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.050164  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.052536  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.061683  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.090288  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.160469  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:09.212314  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:14:09.212343  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:14:09.216077  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.217010  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:14:09.217029  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:14:09.227272  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:14:09.227295  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:14:09.232066  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:14:09.232089  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:14:09.314812  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:14:09.314893  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:14:09.315135  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.315179  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:14:09.329470  269289 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0531 18:14:09.406176  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:14:09.406200  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:14:09.408128  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.429793  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:14:09.429823  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:14:09.516510  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:14:09.516537  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:14:09.530570  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:14:09.530597  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:14:09.612604  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:14:09.612631  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:14:09.631042  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:09.631070  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:14:09.711865  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:10.228589  269289 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531175604-6903"
	I0531 18:14:10.618859  269289 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:14:09.072442  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:11.571596  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:10.620376  269289 addons.go:417] enableAddons completed in 1.673239404s
	I0531 18:14:10.977314  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:09.534892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:12.034154  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:13.571853  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:16.071183  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:13.477113  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:15.477267  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:17.477503  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:14.034360  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:16.534633  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:18.071690  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:20.570878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:19.976762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:22.476749  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:18.534988  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:21.033579  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:23.072062  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:25.571452  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:24.976557  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:26.977352  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:23.534867  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:26.034090  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:27.571576  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:30.072241  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:29.477126  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:31.976050  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:28.534857  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:31.033495  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:33.034149  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:32.571726  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:35.071520  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:33.977624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:36.477062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:35.034245  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.534397  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.072086  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:39.072305  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:41.571404  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:38.977021  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:41.477149  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:40.033940  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:42.534713  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:44.072401  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:46.571051  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:43.977151  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:46.476598  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:45.033654  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:47.033892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:48.571836  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:51.071985  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:48.477002  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:50.477180  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:49.034062  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:51.534464  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:53.571426  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:55.571659  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:52.976837  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:55.476624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:57.477076  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:54.033904  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:56.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:58.071447  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:00.072133  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:59.976445  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:01.976750  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:58.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:01.034476  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:02.072236  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.571269  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:06.571629  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:06.476291  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:03.534865  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:06.033980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:08.571738  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:11.071854  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:08.476832  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:10.976476  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:08.533980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:10.534643  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.033474  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.072258  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:15.072469  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:12.977062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.476762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.534881  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:18.033601  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:17.570753  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:19.571375  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:21.571479  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:17.976899  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:19.977288  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:22.476772  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:20.534891  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.033328  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.571892  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:26.071396  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:24.477019  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:26.976517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:25.033998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:27.534916  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:28.072042  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:30.571432  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:32.073761  261225 node_ready.go:38] duration metric: took 4m0.008462542s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:15:32.075979  261225 out.go:177] 
	W0531 18:15:32.077511  261225 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:15:32.077526  261225 out.go:239] * 
	W0531 18:15:32.078185  261225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:15:32.080080  261225 out.go:177] 
	I0531 18:15:29.477285  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:31.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:30.033697  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:32.033898  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:33.977634  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:36.476328  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:34.034272  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:36.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:38.476673  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:40.477412  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:38.534463  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:40.534774  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.976241  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:44.977315  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:47.476536  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:45.034265  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:47.534278  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:49.477384  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:51.976596  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:49.534496  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:51.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:54.476365  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:56.477128  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:54.033999  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:56.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:58.976541  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:00.976604  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:58.535059  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:01.033371  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:03.033446  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:02.976738  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:04.976824  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:07.476516  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:05.033660  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:07.034297  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:09.976551  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:11.977337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:09.534321  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:11.534699  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:14.476763  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:16.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:14.033838  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:16.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:18.976865  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:20.977366  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:18.533927  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:20.534762  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.034186  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.477097  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.976964  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.034285  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:27.534416  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:28.476490  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.477181  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.033979  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.534354  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.977105  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:35.477096  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:37.477182  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:34.534436  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:37.034012  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:39.976471  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:42.476550  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:39.534598  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:41.534728  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:44.976701  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:46.976746  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:44.033664  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:46.534914  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:49.476635  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:51.476946  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:48.535136  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:51.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:53.976362  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:55.976980  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:53.534196  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:55.534525  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:57.535035  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:58.476831  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.477321  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.033962  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.534939  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.976221  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.477114  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:07.477398  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.033341  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:07.033678  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.034288  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.536916  265084 node_ready.go:38] duration metric: took 4m0.012822769s waiting for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:17:09.538829  265084 out.go:177] 
	W0531 18:17:09.540332  265084 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:17:09.540349  265084 out.go:239] * 
	W0531 18:17:09.541063  265084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:17:09.542651  265084 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	436d78562200c       6de166512aa22       19 seconds ago      Exited              kindnet-cni               5                   c928703617d79
	455fbb97d03b9       4c03754524064       4 minutes ago       Running             kube-proxy                0                   5c14a7f925ed3
	12697fd1421e9       25f8c7f3da61c       4 minutes ago       Running             etcd                      2                   fa854a11b419b
	1d2e62f7898bb       595f327f224a4       4 minutes ago       Running             kube-scheduler            2                   69ec4dcae13da
	cb4b84c2abb44       df7b72818ad2e       4 minutes ago       Running             kube-controller-manager   2                   adb58aba13dda
	0eab29b61aa2c       8fa62c12256df       4 minutes ago       Running             kube-apiserver            2                   116fe8172c205
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 18:08:09 UTC, end at Tue 2022-05-31 18:17:10 UTC. --
	May 31 18:14:25 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:14:25.028951282Z" level=warning msg="cleaning up after shim disconnected" id=60581ccbdeda6d1d2d026e79e525e60c5921ef61d48c0f441dd47e085608ce1f namespace=k8s.io
	May 31 18:14:25 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:14:25.028965102Z" level=info msg="cleaning up dead shim"
	May 31 18:14:25 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:14:25.037933149Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:14:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4149 runtime=io.containerd.runc.v2\n"
	May 31 18:14:25 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:14:25.779304739Z" level=info msg="RemoveContainer for \"015ea903eb01da4e414788e0ce096964403adbb04085d5010cac9cff1d5b2577\""
	May 31 18:14:25 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:14:25.783514357Z" level=info msg="RemoveContainer for \"015ea903eb01da4e414788e0ce096964403adbb04085d5010cac9cff1d5b2577\" returns successfully"
	May 31 18:15:17 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:17.616220768Z" level=info msg="CreateContainer within sandbox \"c928703617d79c73d40117ac859d5d74c21fa1b208ad5c27fb9062be1d53a963\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	May 31 18:15:17 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:17.628782574Z" level=info msg="CreateContainer within sandbox \"c928703617d79c73d40117ac859d5d74c21fa1b208ad5c27fb9062be1d53a963\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892\""
	May 31 18:15:17 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:17.629219441Z" level=info msg="StartContainer for \"8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892\""
	May 31 18:15:17 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:17.716364535Z" level=info msg="StartContainer for \"8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892\" returns successfully"
	May 31 18:15:27 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:27.938147983Z" level=info msg="shim disconnected" id=8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892
	May 31 18:15:27 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:27.938198285Z" level=warning msg="cleaning up after shim disconnected" id=8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892 namespace=k8s.io
	May 31 18:15:27 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:27.938209667Z" level=info msg="cleaning up dead shim"
	May 31 18:15:27 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:27.946854303Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:15:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4228 runtime=io.containerd.runc.v2\n"
	May 31 18:15:28 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:28.891885271Z" level=info msg="RemoveContainer for \"60581ccbdeda6d1d2d026e79e525e60c5921ef61d48c0f441dd47e085608ce1f\""
	May 31 18:15:28 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:15:28.896471375Z" level=info msg="RemoveContainer for \"60581ccbdeda6d1d2d026e79e525e60c5921ef61d48c0f441dd47e085608ce1f\" returns successfully"
	May 31 18:16:50 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:16:50.616155839Z" level=info msg="CreateContainer within sandbox \"c928703617d79c73d40117ac859d5d74c21fa1b208ad5c27fb9062be1d53a963\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	May 31 18:16:50 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:16:50.628320330Z" level=info msg="CreateContainer within sandbox \"c928703617d79c73d40117ac859d5d74c21fa1b208ad5c27fb9062be1d53a963\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435\""
	May 31 18:16:50 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:16:50.628822667Z" level=info msg="StartContainer for \"436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435\""
	May 31 18:16:50 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:16:50.705618935Z" level=info msg="StartContainer for \"436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435\" returns successfully"
	May 31 18:17:00 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:00.936355365Z" level=info msg="shim disconnected" id=436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435
	May 31 18:17:00 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:00.936418536Z" level=warning msg="cleaning up after shim disconnected" id=436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435 namespace=k8s.io
	May 31 18:17:00 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:00.936432360Z" level=info msg="cleaning up dead shim"
	May 31 18:17:00 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:00.945693053Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:17:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4309 runtime=io.containerd.runc.v2\n"
	May 31 18:17:01 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:01.049666090Z" level=info msg="RemoveContainer for \"8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892\""
	May 31 18:17:01 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:01.053901723Z" level=info msg="RemoveContainer for \"8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220531175509-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220531175509-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:12:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220531175509-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:17:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:13:07 +0000   Tue, 31 May 2022 18:12:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:13:07 +0000   Tue, 31 May 2022 18:12:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:13:07 +0000   Tue, 31 May 2022 18:12:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:13:07 +0000   Tue, 31 May 2022 18:12:50 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220531175509-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                6be22935-bf30-494f-8e0a-066b777ef988
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220531175509-6903                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-gt5pn                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220531175509-6903              250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220531175509-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-tpq55                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220531175509-6903              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m10s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s  kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [12697fd1421e93b5d542c7d997c01069ce82acf0a8fc0aaea55e814d7935d1a8] <==
	* {"level":"info","ts":"2022-05-31T18:12:50.011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-05-31T18:12:50.012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220531175509-6903 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:12:50.906Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:12:50.906Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:12:50.906Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-05-31T18:12:50.906Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:17:10 up  1:59,  0 users,  load average: 0.27, 0.46, 0.82
	Linux default-k8s-different-port-20220531175509-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0eab29b61aa2c7e0f970b49b0f8056ab24dbd7be68969108e87bbdcfb92db41a] <==
	* I0531 18:12:54.237977       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:12:54.241062       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:12:54.792926       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:12:55.455892       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:12:55.462355       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:12:55.472831       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:13:00.609469       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:13:08.047585       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:13:08.547848       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:13:09.323466       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 18:13:10.616299       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.145.22]
	I0531 18:13:11.323751       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.235.175]
	I0531 18:13:11.333252       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.101.212.121]
	W0531 18:13:11.417521       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:13:11.417593       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:13:11.417610       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:14:11.417733       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:14:11.417795       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:14:11.417803       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:16:11.417944       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:16:11.418036       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:16:11.418052       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [cb4b84c2abb44a68159288cb28f3587bdecfc650c3eee702f92f1181a79626da] <==
	* I0531 18:13:11.215723       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:13:11.218864       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:13:11.218908       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:13:11.221453       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:13:11.221859       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:13:11.228399       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:13:11.228402       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:13:11.304698       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-cwdrn"
	I0531 18:13:11.308181       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-nd4tk"
	E0531 18:13:37.865637       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:13:38.280998       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:14:07.883718       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:14:08.294070       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:14:37.897530       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:14:38.309666       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:15:07.913762       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:15:08.327309       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:15:37.931482       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:15:38.340331       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:16:07.946058       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:16:08.354270       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:16:37.962674       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:16:38.368209       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:17:07.977158       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:17:08.382979       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [455fbb97d03b9f72cb1f4a7f7f3c22f76652cadb8fc46891ed603d334db62140] <==
	* I0531 18:13:09.225018       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0531 18:13:09.225062       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0531 18:13:09.225093       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:13:09.320116       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:13:09.320155       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:13:09.320163       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:13:09.320175       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:13:09.320546       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:13:09.321064       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:13:09.321069       1 config.go:317] "Starting service config controller"
	I0531 18:13:09.321098       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:13:09.321096       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:13:09.421426       1 shared_informer.go:247] Caches are synced for service config 
	I0531 18:13:09.421457       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [1d2e62f7898bb3d29c16e59fc578ac5fc7bc548fcb40e02c3b660f074314033b] <==
	* W0531 18:12:52.726679       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:12:52.727088       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:12:52.726698       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:12:52.727110       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:12:52.727124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:12:52.727125       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:12:52.727541       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:12:52.727647       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:12:52.727703       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:12:52.727849       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:12:52.727874       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:12:52.727133       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:12:53.683196       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:12:53.683237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:12:53.712445       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:12:53.712495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:12:53.778372       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:12:53.778418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:12:53.806469       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:12:53.806496       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:12:53.874326       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:12:53.874364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:12:53.915573       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:12:53.915612       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0531 18:12:54.221320       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:08:09 UTC, end at Tue 2022-05-31 18:17:10 UTC. --
	May 31 18:16:00 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:00.796569    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:05 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:05.798004    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:09 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:16:09.613395    3049 scope.go:110] "RemoveContainer" containerID="8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892"
	May 31 18:16:09 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:09.613802    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:16:10 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:10.799733    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:15 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:15.801139    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:20 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:16:20.613201    3049 scope.go:110] "RemoveContainer" containerID="8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892"
	May 31 18:16:20 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:20.613598    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:16:20 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:20.802416    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:25 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:25.803045    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:30 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:30.804675    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:35 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:16:35.613947    3049 scope.go:110] "RemoveContainer" containerID="8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892"
	May 31 18:16:35 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:35.614209    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:16:35 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:35.805895    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:40 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:40.807346    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:45 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:45.807964    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:50 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:16:50.613696    3049 scope.go:110] "RemoveContainer" containerID="8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892"
	May 31 18:16:50 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:50.809140    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:55 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:16:55.810735    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:00 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:17:00.812050    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:01 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:17:01.048335    3049 scope.go:110] "RemoveContainer" containerID="8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892"
	May 31 18:17:01 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:17:01.048697    3049 scope.go:110] "RemoveContainer" containerID="436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435"
	May 31 18:17:01 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:17:01.049095    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:17:05 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:17:05.813454    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:10 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:17:10.814367    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-qnj2l metrics-server-b955d9d8-q5pgx storage-provisioner dashboard-metrics-scraper-56974995fc-cwdrn kubernetes-dashboard-8469778f77-nd4tk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-qnj2l metrics-server-b955d9d8-q5pgx storage-provisioner dashboard-metrics-scraper-56974995fc-cwdrn kubernetes-dashboard-8469778f77-nd4tk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-qnj2l metrics-server-b955d9d8-q5pgx storage-provisioner dashboard-metrics-scraper-56974995fc-cwdrn kubernetes-dashboard-8469778f77-nd4tk: exit status 1 (55.583621ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-qnj2l" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-q5pgx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-cwdrn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-nd4tk" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-qnj2l metrics-server-b955d9d8-q5pgx storage-provisioner dashboard-metrics-scraper-56974995fc-cwdrn kubernetes-dashboard-8469778f77-nd4tk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (543.26s)
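
The logs above share a single signature: the node never leaves NotReady because the kubelet keeps reporting "cni plugin not initialized", and the kindnet-cni container crash-loops before it can install a CNI config. A minimal triage sketch against the same profile (the kubeconfig context and pod name are copied from the logs above; the kindnet pod name changes per run, and /etc/cni/net.d is the kubelet's default conf dir, which some profiles in this job override to /etc/cni/net.mk):

	# Confirm the node-level symptom (Ready=False, NetworkPluginNotReady)
	kubectl --context default-k8s-different-port-20220531175509-6903 get nodes -o wide

	# Pull the previous (crashed) container's output for the CNI pod
	kubectl --context default-k8s-different-port-20220531175509-6903 -n kube-system logs -p kindnet-gt5pn

	# Check whether any CNI config was ever written inside the node container
	minikube -p default-k8s-different-port-20220531175509-6903 ssh -- ls -l /etc/cni/net.d /etc/cni/net.mk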

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (543.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220531175604-6903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0531 18:09:08.119943    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 18:09:11.143259    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 18:09:25.073464    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 18:09:48.440908    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 18:09:58.425130    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:09:59.713101    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:10:15.863195    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 18:11:05.002331    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 18:12:15.749781    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 18:13:38.792349    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 18:13:40.657751    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 18:13:57.610753    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 18:14:11.143814    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 18:14:25.073196    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 18:14:48.440790    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 18:14:58.424983    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:14:59.712378    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:15:15.863753    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20220531175604-6903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: exit status 80 (9m1.274995648s)

                                                
                                                
-- stdout --
	* [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	* Pulling base image ...
	* Restarting existing docker container for "embed-certs-20220531175604-6903" ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image k8s.gcr.io/echoserver:1.4
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:09:07.772021  269289 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:09:07.772181  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772198  269289 out.go:309] Setting ErrFile to fd 2...
	I0531 18:09:07.772206  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772308  269289 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:09:07.772576  269289 out.go:303] Setting JSON to false
	I0531 18:09:07.774031  269289 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6699,"bootTime":1654013849,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:09:07.774104  269289 start.go:125] virtualization: kvm guest
	I0531 18:09:07.776455  269289 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:09:07.777955  269289 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:09:07.777894  269289 notify.go:193] Checking for updates...
	I0531 18:09:07.779456  269289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:09:07.780983  269289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:07.782421  269289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:09:07.783767  269289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:09:07.785508  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:07.786063  269289 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:09:07.823614  269289 docker.go:137] docker version: linux-20.10.16
	I0531 18:09:07.823700  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:07.921312  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.851605066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:07.921419  269289 docker.go:254] overlay module found
	I0531 18:09:07.923729  269289 out.go:177] * Using the docker driver based on existing profile
	I0531 18:09:07.925091  269289 start.go:284] selected driver: docker
	I0531 18:09:07.925101  269289 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:07.925198  269289 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:09:07.926037  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:08.026136  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.954605342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:08.026404  269289 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:09:08.026427  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:08.026434  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
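The kindnet recommendation above is minikube's default whenever the docker driver is paired with a non-Docker runtime and no --cni flag was given. A simplified, hypothetical Go sketch of that decision follows (the real logic lives in minikube's cni package; the function name and signature here are illustrative only):

    package main

    import "fmt"

    // chooseCNI is an illustrative reduction of minikube's CNI choice:
    // an explicit --cni wins; otherwise the docker driver with a
    // containerd runtime gets kindnet, which is what cni.go:162 logs.
    func chooseCNI(requested, driver, runtime string) string {
    	if requested != "" { // e.g. --cni=calico
    		return requested
    	}
    	if driver == "docker" && runtime == "containerd" {
    		return "kindnet"
    	}
    	return "" // Docker runtime: no extra CNI is forced
    }

    func main() {
    	fmt.Println(chooseCNI("", "docker", "containerd")) // prints "kindnet"
    }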
	I0531 18:09:08.026459  269289 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:08.029735  269289 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 18:09:08.031102  269289 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:09:08.032588  269289 out.go:177] * Pulling base image ...
	I0531 18:09:08.033842  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:08.033877  269289 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:09:08.033887  269289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:09:08.033901  269289 cache.go:57] Caching tarball of preloaded images
	I0531 18:09:08.034419  269289 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:09:08.034462  269289 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:09:08.034678  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.080805  269289 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:09:08.080832  269289 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:09:08.080845  269289 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:09:08.080882  269289 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:09:08.080970  269289 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 68.005µs
	I0531 18:09:08.080987  269289 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:09:08.080995  269289 fix.go:55] fixHost starting: 
	I0531 18:09:08.081277  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.111663  269289 fix.go:103] recreateIfNeeded on embed-certs-20220531175604-6903: state=Stopped err=<nil>
	W0531 18:09:08.111695  269289 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:09:08.114165  269289 out.go:177] * Restarting existing docker container for "embed-certs-20220531175604-6903" ...
	I0531 18:09:08.115697  269289 cli_runner.go:164] Run: docker start embed-certs-20220531175604-6903
	I0531 18:09:08.473488  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.506356  269289 kic.go:416] container "embed-certs-20220531175604-6903" state is running.
	I0531 18:09:08.506679  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:08.539365  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.539556  269289 machine.go:88] provisioning docker machine ...
	I0531 18:09:08.539577  269289 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 18:09:08.539613  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:08.571324  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:08.571535  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:08.571555  269289 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 18:09:08.572224  269289 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40978->127.0.0.1:49442: read: connection reset by peer
	I0531 18:09:11.691333  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 18:09:11.691423  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:11.724149  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:11.724284  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:11.724303  269289 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:09:11.834529  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: 
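Provisioning drives the container over SSH through the published port (49442 above). A minimal sketch of the same pattern using golang.org/x/crypto/ssh; the key path, user and port are copied from the log, and host-key checking is skipped because this is a throwaway test container:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and port are the ones logged by sshutil.go:53.
    	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/embed-certs-20220531175604-6903/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:49442", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out) // embed-certs-20220531175604-6903
    }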
	I0531 18:09:11.834559  269289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:09:11.834578  269289 ubuntu.go:177] setting up certificates
	I0531 18:09:11.834589  269289 provision.go:83] configureAuth start
	I0531 18:09:11.834632  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:11.865871  269289 provision.go:138] copyHostCerts
	I0531 18:09:11.865924  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:09:11.865932  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:09:11.865982  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:09:11.866066  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:09:11.866081  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:09:11.866104  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:09:11.866152  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:09:11.866160  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:09:11.866179  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:09:11.866227  269289 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
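The SAN list above (192.168.49.2, 127.0.0.1, localhost, minikube, and the profile name) is what ends up in server.pem. A self-contained crypto/x509 sketch that issues an equivalent certificate; note it signs with a throwaway in-memory CA rather than the persisted ca.pem/ca-key.pem the provisioner uses:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (the real flow loads the persisted CA key pair).
    	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs from the provision.go:112 line.
    	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20220531175604-6903"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		DNSNames:     []string{"localhost", "minikube", "embed-certs-20220531175604-6903"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }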
	I0531 18:09:12.090438  269289 provision.go:172] copyRemoteCerts
	I0531 18:09:12.090495  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:09:12.090534  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.123286  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.206789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:09:12.223789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:09:12.239992  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:09:12.256268  269289 provision.go:86] duration metric: configureAuth took 421.669237ms
	I0531 18:09:12.256294  269289 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:09:12.256475  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:12.256491  269289 machine.go:91] provisioned docker machine in 3.716920297s
	I0531 18:09:12.256499  269289 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 18:09:12.256507  269289 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:09:12.256545  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:09:12.256574  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.289898  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.374015  269289 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:09:12.376709  269289 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:09:12.376729  269289 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:09:12.376738  269289 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:09:12.376744  269289 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:09:12.376754  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:09:12.376804  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:09:12.376870  269289 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:09:12.376948  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:09:12.383195  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:12.399428  269289 start.go:309] post-start completed in 142.913406ms
	I0531 18:09:12.399507  269289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:09:12.399549  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.432219  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.515046  269289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:09:12.518751  269289 fix.go:57] fixHost completed within 4.437752156s
	I0531 18:09:12.518774  269289 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 4.437792479s
	I0531 18:09:12.518855  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:12.550553  269289 ssh_runner.go:195] Run: systemctl --version
	I0531 18:09:12.550601  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.550641  269289 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:09:12.550697  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.582421  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.582872  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.659019  269289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:09:12.679913  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:09:12.688643  269289 docker.go:187] disabling docker service ...
	I0531 18:09:12.688702  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:09:12.697529  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:09:12.706100  269289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:09:12.785582  269289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:09:12.855815  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:09:12.864020  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:09:12.875934  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
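The containerd configuration is shipped as a single base64 blob and materialized with base64 -d | sudo tee. Decoded, it is a config.toml whose CNI conf_dir is /etc/cni/net.mk, matching the kubelet cni-conf-dir extra option seen throughout this log. An equivalent decode step in Go; the file names are placeholders:

    package main

    import (
    	"encoding/base64"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	// config.toml.b64 stands in for the blob in the log line above.
    	blob, err := os.ReadFile("config.toml.b64")
    	if err != nil {
    		log.Fatal(err)
    	}
    	raw, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(blob)))
    	if err != nil {
    		log.Fatal(err)
    	}
    	// minikube pipes the decoded bytes through `sudo tee /etc/containerd/config.toml`.
    	if err := os.WriteFile("config.toml", raw, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }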
	I0531 18:09:12.888218  269289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:09:12.894177  269289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:09:12.900048  269289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:09:12.967246  269289 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:09:13.034030  269289 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:09:13.034097  269289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:09:13.037697  269289 start.go:468] Will wait 60s for crictl version
	I0531 18:09:13.037755  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:13.061656  269289 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:09:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:09:24.108851  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:24.132832  269289 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:09:24.132887  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.159676  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.189925  269289 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:09:24.191315  269289 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:09:24.222603  269289 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:09:24.225783  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.236610  269289 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:09:24.237875  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:24.237934  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.260983  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.261003  269289 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:09:24.261042  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.282964  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.282986  269289 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:09:24.283024  269289 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:09:24.304408  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:24.304433  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:24.304447  269289 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:09:24.304466  269289 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:09:24.304647  269289 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
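minikube renders the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) from Go templates before writing kubeadm.yaml.new. A reduced, hypothetical rendering of just the ClusterConfiguration part; the template text below is illustrative, not minikube's actual template:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // A cut-down stand-in for the ClusterConfiguration document above.
    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: mk
    controlPlaneEndpoint: {{.Endpoint}}:8443
    kubernetesVersion: {{.Version}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	t := template.Must(template.New("cc").Parse(clusterCfg))
    	err := t.Execute(os.Stdout, map[string]string{
    		"Endpoint":      "control-plane.minikube.internal",
    		"Version":       "v1.23.6",
    		"PodSubnet":     "10.244.0.0/16",
    		"ServiceSubnet": "10.96.0.0/12",
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }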
	
	I0531 18:09:24.304770  269289 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:09:24.304828  269289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:09:24.311429  269289 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:09:24.311492  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:09:24.317597  269289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 18:09:24.329144  269289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:09:24.340465  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0531 18:09:24.352005  269289 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:09:24.354594  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.362803  269289 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 18:09:24.362883  269289 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:09:24.362917  269289 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:09:24.362978  269289 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 18:09:24.363028  269289 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 18:09:24.363065  269289 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 18:09:24.363186  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:09:24.363227  269289 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:09:24.363235  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:09:24.363261  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:09:24.363280  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:09:24.363304  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:09:24.363343  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:24.363895  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:09:24.379919  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:09:24.395597  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:09:24.411294  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:09:24.426758  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:09:24.442477  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:09:24.458117  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:09:24.473675  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:09:24.489258  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:09:24.504896  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:09:24.520499  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:09:24.536081  269289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:09:24.547693  269289 ssh_runner.go:195] Run: openssl version
	I0531 18:09:24.552235  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:09:24.559008  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561937  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561976  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.566453  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:09:24.572576  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:09:24.579358  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582018  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582059  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.586386  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:09:24.592600  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:09:24.599203  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.601990  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.602019  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.606281  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:09:24.612350  269289 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:24.612448  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:09:24.612477  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:24.635020  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:24.635044  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:24.635056  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:24.635066  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:24.635074  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:24.635089  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:24.635098  269289 cri.go:87] found id: ""
	I0531 18:09:24.635135  269289 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:09:24.646413  269289 cri.go:114] JSON = null
	W0531 18:09:24.646451  269289 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
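The warning above comes from a consistency check: crictl ps reports six kube-system containers, but runc's JSON listing (used to find paused ones) returns null, so there is nothing to unpause. A rough sketch of that comparison; the command lines are taken from the log, and error handling is omitted for brevity:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	psOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	ids := strings.Fields(string(psOut))

    	runcOut, _ := exec.Command("sudo", "runc",
    		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
    	var listed []struct {
    		ID     string `json:"id"`
    		Status string `json:"status"`
    	}
    	_ = json.Unmarshal(runcOut, &listed) // JSON "null" leaves the slice nil

    	if len(listed) == 0 && len(ids) > 0 {
    		fmt.Printf("list returned 0 containers, but ps returned %d\n", len(ids))
    	}
    }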
	I0531 18:09:24.646485  269289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:09:24.652823  269289 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:09:24.652845  269289 kubeadm.go:626] restartCluster start
	I0531 18:09:24.652871  269289 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:09:24.658784  269289 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.659393  269289 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531175604-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:24.659677  269289 kubeconfig.go:127] "embed-certs-20220531175604-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:09:24.660165  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
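The repair step rewrites the kubeconfig so the profile's cluster, user and context entries exist again. A sketch of the same operation with client-go's clientcmd package (requires k8s.io/client-go); the certificate paths are placeholders, and the server address is assembled from the node IP and port above:

    package main

    import (
    	"log"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	const name = "embed-certs-20220531175604-6903"
    	path := os.Getenv("KUBECONFIG") // the integration kubeconfig above

    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Placeholder cert paths; minikube points these at the profile directory.
    	cfg.Clusters[name] = &api.Cluster{
    		Server:               "https://192.168.49.2:8443",
    		CertificateAuthority: "/path/to/.minikube/ca.crt",
    	}
    	cfg.AuthInfos[name] = &api.AuthInfo{
    		ClientCertificate: "/path/to/profiles/" + name + "/client.crt",
    		ClientKey:         "/path/to/profiles/" + name + "/client.key",
    	}
    	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}

    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		log.Fatal(err)
    	}
    }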
	I0531 18:09:24.661391  269289 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:09:24.667472  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.667510  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.674690  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.875005  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.875078  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.883754  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.075164  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.075227  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.083571  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.275792  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.275862  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.284084  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.475412  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.475492  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.483769  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.675018  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.675093  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.683421  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.875700  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.875758  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.884127  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.075367  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.075441  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.083636  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.274875  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.274936  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.283259  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.475530  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.475600  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.483831  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.675186  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.675264  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.683305  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.875527  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.875591  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.883712  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.074838  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.074911  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.082943  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.275201  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.275279  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.283352  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.475732  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.475810  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.484027  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.675351  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.675423  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.683562  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.683579  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.683610  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.690956  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.690974  269289 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
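The block above is minikube's ssh_runner polling for a kube-apiserver process roughly every 200ms; pgrep exits with status 1 while nothing matches, and once the wait budget is spent kubeadm.go concludes the node "needs reconfigure". A minimal stand-alone sketch of that loop (assuming local execution instead of minikube's SSH runner; this is not minikube's actual code):

	// waitapiserver.go - illustrative sketch of the polling visible in the
	// api_server.go lines above: retry pgrep until it finds the process or
	// a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no process matches, which is what
			// the log reports as "Process exited with status 1".
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return string(out), nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return "", fmt.Errorf("timed out waiting for kube-apiserver process")
	}

	func main() {
		pid, err := waitForAPIServerPID(30 * time.Second)
		if err != nil {
			fmt.Println("apiserver not up:", err)
			return
		}
		fmt.Println("apiserver pid:", pid)
	}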
	I0531 18:09:27.690980  269289 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:09:27.690988  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:09:27.691025  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:27.714735  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:27.714760  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:27.714770  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:27.714779  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:27.714788  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:27.714799  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:27.714808  269289 cri.go:87] found id: ""
	I0531 18:09:27.714813  269289 cri.go:232] Stopping containers: [1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad]
	I0531 18:09:27.714851  269289 ssh_runner.go:195] Run: which crictl
	I0531 18:09:27.717532  269289 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad
	I0531 18:09:27.740922  269289 ssh_runner.go:195] Run: sudo systemctl stop kubelet
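Before reconfiguring, every kube-system container found via the crictl label filter is stopped in a single crictl stop call, and then the kubelet itself is stopped. A rough equivalent of those log lines, as a sketch (illustrative, not minikube's implementation):

	// stopkube.go - list kube-system container IDs by pod-namespace label,
	// stop them in one crictl call, then stop the kubelet.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("listing containers failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) > 0 {
			args := append([]string{"crictl", "stop"}, ids...)
			if err := exec.Command("sudo", args...).Run(); err != nil {
				fmt.Println("crictl stop failed:", err)
				return
			}
		}
		if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
			fmt.Println("stopping kubelet failed:", err)
			return
		}
		fmt.Printf("stopped %d containers and the kubelet\n", len(ids))
	}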
	I0531 18:09:27.750082  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:09:27.756789  269289 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:56 /etc/kubernetes/scheduler.conf
	
	I0531 18:09:27.756825  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:09:27.763097  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:09:27.769330  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:09:27.775774  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.775824  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:09:27.782146  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:09:27.788206  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.788249  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
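The grep probes above test whether each kubeconfig still points at https://control-plane.minikube.internal:8443; a grep exit status of 1 means the endpoint is absent, so the file is deleted and regenerated later by the kubeconfig phase. A sketch of that check (illustrative only):

	// confcheck.go - remove any of the four kubeconfigs that no longer
	// reference the expected control-plane endpoint, mirroring the
	// grep-then-rm sequence in the log.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: nothing to clean up
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Println("removing stale", f)
				os.Remove(f)
			}
		}
	}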
	I0531 18:09:27.794183  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800354  269289 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800371  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:27.840426  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.483103  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.611279  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.659562  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
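Rather than a full kubeadm init, the reconfigure path replays the individual init phases against the generated /var/tmp/minikube/kubeadm.yaml. A simplified sketch of that sequence (the real invocations, as logged, also prepend env PATH=... to select the versioned binaries):

	// phases.go - run each kubeadm init phase from the log in order,
	// aborting on the first failure.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", cfg},
			{"init", "phase", "kubeconfig", "all", "--config", cfg},
			{"init", "phase", "kubelet-start", "--config", cfg},
			{"init", "phase", "control-plane", "all", "--config", cfg},
			{"init", "phase", "etcd", "local", "--config", cfg},
		}
		for _, p := range phases {
			cmd := exec.Command("sudo", append([]string{"kubeadm"}, p...)...)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
		fmt.Println("control plane phases complete")
	}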
	I0531 18:09:28.717430  269289 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:09:28.717495  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.225924  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.725826  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.225776  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.725310  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.225503  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.725612  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.225964  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.726000  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:33.225375  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:33.726285  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.225275  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.725662  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.736940  269289 api_server.go:71] duration metric: took 6.019510906s to wait for apiserver process to appear ...
	I0531 18:09:34.736971  269289 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:09:34.736983  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:34.737332  269289 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0531 18:09:35.238095  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.110402  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:09:38.110472  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:09:38.237764  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.306327  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.306358  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:38.737926  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.744239  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.744265  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.237517  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.242116  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:39.242143  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.738349  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.743589  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:09:39.749147  269289 api_server.go:140] control plane version: v1.23.6
	I0531 18:09:39.749166  269289 api_server.go:130] duration metric: took 5.012189517s to wait for apiserver health ...
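The healthz progression above is typical of a restarting apiserver: first connection refused, then 403 (anonymous access to /healthz is forbidden until the RBAC bootstrap roles land), then 500 while poststarthooks are still pending, and finally 200. A minimal poller in the same spirit (assuming the endpoint from this log, and skipping TLS verification purely for illustration; minikube's real client authenticates against the cluster CA):

	// healthz.go - poll /healthz until it returns 200 "ok" or we give up.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < 60; i++ {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403: server up but anonymous /healthz still forbidden.
				// 500: body lists the failing poststarthooks.
				// 200: body is "ok" and the apiserver is healthy.
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthy:", string(body))
					return
				}
				fmt.Printf("status %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for /healthz")
	}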
	I0531 18:09:39.749173  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:39.749179  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:39.751213  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:09:39.752701  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:09:39.818871  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:09:39.818891  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:09:39.834366  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
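The kindnet manifest is copied to /var/tmp/minikube/cni.yaml and applied with the cluster's own versioned kubectl and kubeconfig, as the preceding lines show. As a sketch:

	// cniapply.go - apply the generated CNI manifest using the paths from
	// the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("apply failed: %v\n%s\n", err, out)
			return
		}
		fmt.Println("CNI manifest applied")
	}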
	I0531 18:09:40.442544  269289 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:09:40.449925  269289 system_pods.go:59] 9 kube-system pods found
	I0531 18:09:40.449965  269289 system_pods.go:61] "coredns-64897985d-w2s2k" [272d1735-077b-4af0-a2fa-5b0f85c8e4fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.449977  269289 system_pods.go:61] "etcd-embed-certs-20220531175604-6903" [73942f36-bfae-49e5-87fb-43820d0182cc] Running
	I0531 18:09:40.449988  269289 system_pods.go:61] "kindnet-jrlsl" [c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:09:40.449999  269289 system_pods.go:61] "kube-apiserver-embed-certs-20220531175604-6903" [ecfb98e0-1317-4251-b80f-93138d93521a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:09:40.450014  269289 system_pods.go:61] "kube-controller-manager-embed-certs-20220531175604-6903" [f655b743-bb70-49d9-ab57-6575a05ae6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:09:40.450024  269289 system_pods.go:61] "kube-proxy-nvktf" [a3c917bd-93a0-40b6-85c5-7ea637a1aaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:09:40.450039  269289 system_pods.go:61] "kube-scheduler-embed-certs-20220531175604-6903" [7ff61926-9d06-4412-b29e-a3b958ae7fdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:09:40.450056  269289 system_pods.go:61] "metrics-server-b955d9d8-5hcnq" [bc956dda-3db9-478e-8fe9-4f4375857a12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450067  269289 system_pods.go:61] "storage-provisioner" [d2e35bf4-3bfa-46e5-91c7-70a4f1ab348a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450075  269289 system_pods.go:74] duration metric: took 7.509225ms to wait for pod list to return data ...
	I0531 18:09:40.450087  269289 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:09:40.452551  269289 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:09:40.452581  269289 node_conditions.go:123] node cpu capacity is 8
	I0531 18:09:40.452591  269289 node_conditions.go:105] duration metric: took 2.4994ms to run NodePressure ...
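Every Pending pod in the list above is blocked by the same scheduling predicate: the node still carries the node.kubernetes.io/not-ready taint. A quick diagnostic sketch (a hypothetical helper, not part of the test suite) to surface the taints:

	// taints.go - print the taints on the first node; in this run it should
	// show node.kubernetes.io/not-ready, which explains the Pending pods.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes",
			"-o", `jsonpath={.items[0].spec.taints}`).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Println(string(out))
	}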
	I0531 18:09:40.452605  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:40.569376  269289 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573483  269289 kubeadm.go:777] kubelet initialised
	I0531 18:09:40.573510  269289 kubeadm.go:778] duration metric: took 4.104573ms waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573518  269289 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:09:40.580067  269289 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
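The polls that follow mirror this wait: fetch the pod and test its Ready condition every couple of seconds until the 4m budget expires. An illustrative stand-in using client-go (assuming a kubeconfig at the default path and the pod name from this run; this is not pod_ready.go itself):

	// podready.go - wait for a pod's Ready condition. Requires
	// k8s.io/client-go in go.mod.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-64897985d-w2s2k", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out: pod never became Ready")
	}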
	I0531 18:09:42.585321  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.586120  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:47.085707  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.585053  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.585603  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.084883  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.085413  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.085579  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.584952  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.585345  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:07.085004  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:09.585758  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.085534  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.085723  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:16.085916  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:18.584998  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.585431  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.085208  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.585032  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.585926  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.085645  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.586123  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.084913  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.585507  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:42.084955  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.085118  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.085542  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.586090  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.085610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:53.585839  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.084986  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.085315  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.085770  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.585759  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:04.587602  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.085341  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:12.086457  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:14.586248  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:17.085311  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:19.087921  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.585530  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:23.585627  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.585786  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.585848  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:30.085797  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.086472  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:34.585378  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.085134  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:39.585652  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.085745  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:44.585489  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.084675  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 48 further pod_ready.go:102 poll entries, one roughly every 2.5s from 18:11:49 through 18:13:39, each reporting the same status for pod "coredns-64897985d-w2s2k": Pending/Unschedulable, "0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate." ...]
	I0531 18:13:40.582100  269289 pod_ready.go:81] duration metric: took 4m0.001996579s waiting for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
	E0531 18:13:40.582169  269289 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:13:40.582215  269289 pod_ready.go:38] duration metric: took 4m0.00868413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:13:40.582275  269289 kubeadm.go:630] restartCluster took 4m15.929419982s
	W0531 18:13:40.582469  269289 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
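For context on the pod_ready.go:102 loop that just timed out: below is a minimal client-go sketch of the same kind of check, not minikube's actual implementation. The pod name, namespace, ~2s cadence, and 4m cap are taken from the timestamps above; the kubeconfig path is an assumption.

// Hedged sketch: poll a pod's Ready condition until it is True or a
// 4-minute deadline passes, mirroring the pod_ready.go entries above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-64897985d-w2s2k", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient API errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	// On the failure above, err would be the wait timeout error.
	fmt.Println("wait result:", err)
}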
	I0531 18:13:40.582501  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:13:42.189588  269289 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.6070578s)
	I0531 18:13:42.189653  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:42.199068  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:13:42.205821  269289 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:13:42.205873  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:13:42.212208  269289 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:13:42.212266  269289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:13:56.088137  269289 out.go:204]   - Generating certificates and keys ...
	I0531 18:13:56.090627  269289 out.go:204]   - Booting up control plane ...
	I0531 18:13:56.093129  269289 out.go:204]   - Configuring RBAC rules ...
	I0531 18:13:56.094774  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:13:56.094792  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:13:56.096311  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:13:56.097594  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:13:56.101201  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:13:56.101218  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:13:56.113794  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
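The two lines above ("scp memory --> /var/tmp/minikube/cni.yaml", then kubectl apply) are minikube's write-then-apply pattern for the generated kindnet manifest. A hedged local sketch, with a placeholder manifest and an assumed kubectl on PATH; in minikube the bytes travel over SSH rather than a local file write.

// Hedged sketch: write in-memory manifest bytes to disk, then apply them.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# CNI manifest bytes would go here\n") // placeholder content
	path := "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		panic(err)
	}
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"apply", "-f", path).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}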
	I0531 18:13:56.748029  269289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:13:56.748093  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.748112  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.822037  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.825886  269289 ops.go:34] apiserver oom_adj: -16
	[... 23 further identical "kubectl get sa default" retries, one every ~500ms from 18:13:57 through 18:14:08 ...]
	I0531 18:14:08.431068  269289 kubeadm.go:1045] duration metric: took 11.683021831s to wait for elevateKubeSystemPrivileges.
	I0531 18:14:08.431097  269289 kubeadm.go:397] StartCluster complete in 4m43.818756101s
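The burst of "kubectl get sa default" retries above is elevateKubeSystemPrivileges waiting for the token controller to create the default service account before RBAC work can proceed. A hedged sketch of that retry loop; the binary path and flags are copied from the log, while the 2-minute cap is illustrative (and the real calls run under sudo on the node).

// Hedged sketch: retry "get sa default" every 500ms until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.23.6/kubectl"
	args := []string{"get", "sa", "default",
		"--kubeconfig=/var/lib/minikube/kubeconfig"}
	deadline := time.Now().Add(2 * time.Minute) // illustrative cap
	for time.Now().Before(deadline) {
		if err := exec.Command(kubectl, args...).Run(); err == nil {
			fmt.Println("default service account exists; RBAC setup can proceed")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	fmt.Println("timed out waiting for default service account")
}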
	I0531 18:14:08.431119  269289 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.431248  269289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:14:08.432696  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.947002  269289 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 18:14:08.947054  269289 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:14:08.948675  269289 out.go:177] * Verifying Kubernetes components...
	I0531 18:14:08.947153  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:14:08.947168  269289 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:14:08.947347  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:14:08.950015  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:14:08.950062  269289 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950074  269289 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950082  269289 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950092  269289 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950100  269289 addons.go:165] addon metrics-server should already be in state true
	I0531 18:14:08.950148  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950083  269289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 18:14:08.950101  269289 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950271  269289 addons.go:165] addon dashboard should already be in state true
	I0531 18:14:08.950319  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950061  269289 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950364  269289 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950378  269289 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:14:08.950417  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950518  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950641  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950757  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950872  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.963248  269289 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:14:08.996303  269289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:14:08.997754  269289 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:08.997776  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:14:08.997822  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:08.999414  269289 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.999437  269289 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:14:08.999466  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:09.001079  269289 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:14:08.999831  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:09.003755  269289 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:14:09.002479  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:14:09.005292  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:14:09.005383  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.007089  269289 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:14:09.008807  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:14:09.008840  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:14:09.008896  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.039305  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:14:09.047368  269289 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.047395  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:14:09.047454  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.050164  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.052536  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.061683  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.090288  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.160469  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:09.212314  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:14:09.212343  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:14:09.216077  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.217010  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:14:09.217029  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:14:09.227272  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:14:09.227295  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:14:09.232066  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:14:09.232089  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:14:09.314812  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:14:09.314893  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:14:09.315135  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.315179  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:14:09.329470  269289 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
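The shell pipeline at 18:14:09.039 splices a hosts block into the CoreDNS Corefile so host.minikube.internal resolves to the gateway IP 192.168.49.1. A hedged client-go rendering of the same edit; the string splice is simplified relative to the sed script in the log.

// Hedged sketch: fetch the coredns ConfigMap, insert a hosts{} block
// ahead of the forward plugin, and write the ConfigMap back.
package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(),
		"coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"\n        forward .", "\n"+hosts+"        forward .", 1)
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(),
		cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}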
	I0531 18:14:09.406176  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:14:09.406200  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:14:09.408128  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.429793  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:14:09.429823  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:14:09.516510  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:14:09.516537  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:14:09.530570  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:14:09.530597  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:14:09.612604  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:14:09.612631  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:14:09.631042  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:09.631070  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:14:09.711865  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:10.228589  269289 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531175604-6903"
	I0531 18:14:10.618859  269289 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:14:10.620376  269289 addons.go:417] enableAddons completed in 1.673239404s
	I0531 18:14:10.977314  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	[... 103 further node_ready.go:58 poll entries, one roughly every 2.5s from 18:14:13 through 18:18:08, each reporting node "embed-certs-20220531175604-6903" with status "Ready":"False" ...]
	I0531 18:18:08.978675  269289 node_ready.go:38] duration metric: took 4m0.015379225s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:18:08.980830  269289 out.go:177] 
	W0531 18:18:08.982370  269289 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:18:08.982392  269289 out.go:239] * 
	W0531 18:18:08.983213  269289 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:18:08.984834  269289 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-20220531175604-6903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6": exit status 80
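The wait loop in the stderr capture above polled node readiness for 4m0s before the 6m0s GUEST_START budget expired. To triage a node stuck NotReady against this profile by hand, a sketch like the following can be used (assumes the cluster from this run is still up; kubectl is routed through the minikube binary so the profile's kubeconfig is used):

	# Show node conditions; the node stays NotReady until its CNI reports a working pod network
	out/minikube-linux-amd64 -p embed-certs-20220531175604-6903 kubectl -- get nodes -o wide
	# Inspect the Ready condition's reason/message (typically a kubelet "cni plugin not initialized" message for this failure mode)
	out/minikube-linux-amd64 -p embed-certs-20220531175604-6903 kubectl -- describe node embed-certs-20220531175604-6903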
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531175604-6903
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531175604-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f",
	        "Created": "2022-05-31T17:56:17.948185818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:09:08.465731029Z",
	            "FinishedAt": "2022-05-31T18:09:07.267063318Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hosts",
	        "LogPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f-json.log",
	        "Name": "/embed-certs-20220531175604-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531175604-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531175604-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531175604-6903",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531175604-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531175604-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad7aac916bee030844fa0e7c143e28fc250c0ad6f17da5b84d68ccafe87eb665",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49439"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ad7aac916bee",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531175604-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ac8a0a6250b5",
	                        "embed-certs-20220531175604-6903"
	                    ],
	                    "NetworkID": "810e286ea2469d855f00ec56445da0705b1ca1a44b439a6e099264f06730a27d",
	                    "EndpointID": "22e5779a5560b488e880e110e17956fdd53eadbe6443a536098d446845659c35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
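The full inspect dump above can be narrowed with Go templates; the harness itself does this later in the run (see the `--format={{.State.Status}}` and `{{range .NetworkSettings.Networks}}` invocations in the Last Start log below). A sketch against the same container:

	# Just the container state, as queried on the fix/restart path
	docker container inspect embed-certs-20220531175604-6903 --format '{{.State.Status}}'
	# The cluster IP on the profile network, as used during provisioning
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-20220531175604-6903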
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
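The status probe above filters to the host state only; the same Go-template flag can surface the other status fields when triaging (field names follow minikube's status struct; a sketch):

	# Host, kubelet, and apiserver state in one line for this profile
	out/minikube-linux-amd64 status -p embed-certs-20220531175604-6903 --format 'host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'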
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220531175604-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:09 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:15 UTC | 31 May 22 18:15 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:17 UTC | 31 May 22 18:17 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:09:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:09:07.772021  269289 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:09:07.772181  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772198  269289 out.go:309] Setting ErrFile to fd 2...
	I0531 18:09:07.772206  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772308  269289 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:09:07.772576  269289 out.go:303] Setting JSON to false
	I0531 18:09:07.774031  269289 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6699,"bootTime":1654013849,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:09:07.774104  269289 start.go:125] virtualization: kvm guest
	I0531 18:09:07.776455  269289 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:09:07.777955  269289 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:09:07.777894  269289 notify.go:193] Checking for updates...
	I0531 18:09:07.779456  269289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:09:07.780983  269289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:07.782421  269289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:09:07.783767  269289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:09:07.785508  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:07.786063  269289 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:09:07.823614  269289 docker.go:137] docker version: linux-20.10.16
	I0531 18:09:07.823700  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:07.921312  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.851605066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:07.921419  269289 docker.go:254] overlay module found
	I0531 18:09:07.923729  269289 out.go:177] * Using the docker driver based on existing profile
	I0531 18:09:07.925091  269289 start.go:284] selected driver: docker
	I0531 18:09:07.925101  269289 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTim
eout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:07.925198  269289 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:09:07.926037  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:08.026136  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.954605342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:08.026404  269289 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:09:08.026427  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:08.026434  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:08.026459  269289 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mu
ltiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:08.029735  269289 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 18:09:08.031102  269289 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:09:08.032588  269289 out.go:177] * Pulling base image ...
	I0531 18:09:08.033842  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:08.033877  269289 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:09:08.033887  269289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:09:08.033901  269289 cache.go:57] Caching tarball of preloaded images
	I0531 18:09:08.034419  269289 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:09:08.034462  269289 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:09:08.034678  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.080805  269289 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:09:08.080832  269289 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:09:08.080845  269289 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:09:08.080882  269289 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:09:08.080970  269289 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 68.005µs
	I0531 18:09:08.080987  269289 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:09:08.080995  269289 fix.go:55] fixHost starting: 
	I0531 18:09:08.081277  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.111663  269289 fix.go:103] recreateIfNeeded on embed-certs-20220531175604-6903: state=Stopped err=<nil>
	W0531 18:09:08.111695  269289 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:09:08.114165  269289 out.go:177] * Restarting existing docker container for "embed-certs-20220531175604-6903" ...
	I0531 18:09:03.375777  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:05.375862  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.875726  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.204868  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:09.704483  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:11.704965  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:08.115697  269289 cli_runner.go:164] Run: docker start embed-certs-20220531175604-6903
	I0531 18:09:08.473488  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.506356  269289 kic.go:416] container "embed-certs-20220531175604-6903" state is running.
	I0531 18:09:08.506679  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:08.539365  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.539556  269289 machine.go:88] provisioning docker machine ...
	I0531 18:09:08.539577  269289 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 18:09:08.539613  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:08.571324  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:08.571535  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:08.571555  269289 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 18:09:08.572224  269289 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40978->127.0.0.1:49442: read: connection reset by peer
	I0531 18:09:11.691333  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 18:09:11.691423  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:11.724149  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:11.724284  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:11.724303  269289 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:09:11.834529  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: 
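
The <nil> result confirms the shell above ran cleanly. That script is minikube's idempotent /etc/hosts repair: if no line already names the host, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. A sketch of how such a command could be templated in Go (the function name is illustrative, not minikube's source):

	package hosts

	import "fmt"

	// hostsFixup renders the idempotent /etc/hosts shell seen above:
	// rewrite an existing 127.0.1.1 entry or append a new one, but only
	// when no line already carries the hostname.
	func hostsFixup(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}
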
	I0531 18:09:11.834559  269289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:09:11.834578  269289 ubuntu.go:177] setting up certificates
	I0531 18:09:11.834589  269289 provision.go:83] configureAuth start
	I0531 18:09:11.834632  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:11.865871  269289 provision.go:138] copyHostCerts
	I0531 18:09:11.865924  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:09:11.865932  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:09:11.865982  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:09:11.866066  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:09:11.866081  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:09:11.866104  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:09:11.866152  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:09:11.866160  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:09:11.866179  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:09:11.866227  269289 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
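
provision.go regenerates the docker-machine-style server certificate here, signed by the local minikube CA and carrying SANs for the node IP, localhost and the machine name as listed in san=[...] above. A hedged sketch of issuing such a cert with the standard library; field choices and the function name are assumptions, not minikube's source:

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a CA-signed server certificate whose SANs
	// cover the node IP, localhost and the machine name.
	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile dump below
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans { // split the SAN list into IPs and DNS names
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
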
	I0531 18:09:12.090438  269289 provision.go:172] copyRemoteCerts
	I0531 18:09:12.090495  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:09:12.090534  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.123286  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.206789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:09:12.223789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:09:12.239992  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:09:12.256268  269289 provision.go:86] duration metric: configureAuth took 421.669237ms
	I0531 18:09:12.256294  269289 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:09:12.256475  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:12.256491  269289 machine.go:91] provisioned docker machine in 3.716920297s
	I0531 18:09:12.256499  269289 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 18:09:12.256507  269289 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:09:12.256545  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:09:12.256574  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.289898  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.374015  269289 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:09:12.376709  269289 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:09:12.376729  269289 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:09:12.376738  269289 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:09:12.376744  269289 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:09:12.376754  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:09:12.376804  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:09:12.376870  269289 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:09:12.376948  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:09:12.383195  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:12.399428  269289 start.go:309] post-start completed in 142.913406ms
	I0531 18:09:12.399507  269289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:09:12.399549  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.432219  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.515046  269289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:09:12.518751  269289 fix.go:57] fixHost completed within 4.437752156s
	I0531 18:09:12.518774  269289 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 4.437792479s
	I0531 18:09:12.518855  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:12.550553  269289 ssh_runner.go:195] Run: systemctl --version
	I0531 18:09:12.550601  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.550641  269289 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:09:12.550697  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.582421  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.582872  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.659019  269289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:09:12.679913  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:09:12.688643  269289 docker.go:187] disabling docker service ...
	I0531 18:09:12.688702  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:09:12.697529  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:09:12.706100  269289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:09:09.875798  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.375479  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:13.705008  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.205471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.785582  269289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:09:12.855815  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:09:12.864020  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:09:12.875934  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
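
The long argument above is the rendered containerd config.toml, base64-encoded so the multi-line TOML survives the shell round-trip without quoting, then decoded on the host into /etc/containerd/config.toml. The same trick in miniature (a sketch, not minikube's source; the function name is illustrative):

	package runtime

	import (
		"encoding/base64"
		"fmt"
	)

	// writeConfigCmd wraps a rendered config.toml in the base64 pipeline
	// seen above, so no byte of the TOML needs shell quoting.
	func writeConfigCmd(toml []byte) string {
		b64 := base64.StdEncoding.EncodeToString(toml)
		return fmt.Sprintf(`sudo mkdir -p /etc/containerd && printf %%s "%s" | base64 -d | sudo tee /etc/containerd/config.toml`, b64)
	}
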
	I0531 18:09:12.888218  269289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:09:12.894177  269289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:09:12.900048  269289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:09:12.967246  269289 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:09:13.034030  269289 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:09:13.034097  269289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:09:13.037697  269289 start.go:468] Will wait 60s for crictl version
	I0531 18:09:13.037755  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:13.061656  269289 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:09:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
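
containerd was restarted moments earlier, so its CRI server answers "not initialized yet" and retry.go schedules another attempt inside the 60s crictl budget declared above. The shape of that loop, as a sketch; the backoff values are assumptions, only the behavior mirrors the log:

	// retry_sketch.go: run `sudo crictl version` until it succeeds or
	// the budget is spent, backing off between attempts.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitCrictl(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := 5 * time.Second
		for {
			out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
			if err == nil {
				fmt.Printf("crictl ready:\n%s", out)
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("crictl never came up: %v\n%s", err, out)
			}
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2 // simple doubling; minikube's retry.go jitters instead
		}
	}

	func main() { _ = waitCrictl(60 * time.Second) }
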
	I0531 18:09:14.375758  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.375804  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.205548  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.207254  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.375866  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.875902  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.108851  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:24.132832  269289 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:09:24.132887  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.159676  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.189925  269289 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:09:24.191315  269289 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:09:24.222603  269289 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:09:24.225783  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.236610  269289 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:09:22.705143  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.205093  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.237875  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:24.237934  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.260983  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.261003  269289 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:09:24.261042  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.282964  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.282986  269289 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:09:24.283024  269289 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:09:24.304408  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:24.304433  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:24.304447  269289 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:09:24.304466  269289 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:09:24.304647  269289 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:09:24.304770  269289 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
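
The kubeadm.yaml rendered above is a single stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), and the drop-in unit clears the packaged command with an empty ExecStart= before setting minikube's own. A minimal sketch that splits such a stream and identifies each document by its kind, using gopkg.in/yaml.v3 as an assumption (minikube renders the stream from templates rather than parsing it):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// Expect InitConfiguration, ClusterConfiguration,
			// KubeletConfiguration, KubeProxyConfiguration in order.
			fmt.Println("kind:", doc["kind"])
		}
	}
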
	I0531 18:09:24.304828  269289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:09:24.311429  269289 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:09:24.311492  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:09:24.317597  269289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 18:09:24.329144  269289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:09:24.340465  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0531 18:09:24.352005  269289 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:09:24.354594  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.362803  269289 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 18:09:24.362883  269289 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:09:24.362917  269289 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:09:24.362978  269289 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 18:09:24.363028  269289 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 18:09:24.363065  269289 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 18:09:24.363186  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:09:24.363227  269289 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:09:24.363235  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:09:24.363261  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:09:24.363280  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:09:24.363304  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:09:24.363343  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:24.363895  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:09:24.379919  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:09:24.395597  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:09:24.411294  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:09:24.426758  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:09:24.442477  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:09:24.458117  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:09:24.473675  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:09:24.489258  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:09:24.504896  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:09:24.520499  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:09:24.536081  269289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:09:24.547693  269289 ssh_runner.go:195] Run: openssl version
	I0531 18:09:24.552235  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:09:24.559008  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561937  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561976  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.566453  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:09:24.572576  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:09:24.579358  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582018  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582059  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.586386  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:09:24.592600  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:09:24.599203  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.601990  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.602019  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.606281  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
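
The openssl/ln pairs above install each CA into the system trust store: OpenSSL looks certificates up by subject-hash filenames, so minikube computes the hash (e.g. b5213941) and symlinks /etc/ssl/certs/<hash>.0 at the PEM. A sketch of that dance in Go; the function name is illustrative:

	// cahash_sketch.go: compute a CA's OpenSSL subject hash
	// (openssl x509 -hash -noout -in FILE) and link <hash>.0 at it.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		return exec.Command("sudo", "ln", "-fs", pem, link).Run()
	}

	func main() { fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem")) }
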
	I0531 18:09:24.612350  269289 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:24.612448  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:09:24.612477  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:24.635020  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:24.635044  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:24.635056  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:24.635066  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:24.635074  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:24.635089  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:24.635098  269289 cri.go:87] found id: ""
	I0531 18:09:24.635135  269289 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:09:24.646413  269289 cri.go:114] JSON = null
	W0531 18:09:24.646451  269289 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
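
The warning records a mismatch: crictl ps saw six kube-system containers, but `runc list -f json` printed the JSON literal "null" (no containers in that runc root), so there was nothing to unpause and minikube proceeds. A sketch of the check, assuming runc's documented JSON output; the type and function names are illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausedContainers parses `runc list -f json` and keeps only paused
	// containers; a "null" payload leaves the slice nil, as in the log.
	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		if err != nil {
			return nil, err
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var ids []string
		for _, c := range cs {
			if c.Status == "paused" {
				ids = append(ids, c.ID)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := pausedContainers()
		fmt.Println(ids, err)
	}
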
	I0531 18:09:24.646485  269289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:09:24.652823  269289 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:09:24.652845  269289 kubeadm.go:626] restartCluster start
	I0531 18:09:24.652871  269289 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:09:24.658784  269289 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.659393  269289 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531175604-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:24.659677  269289 kubeconfig.go:127] "embed-certs-20220531175604-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:09:24.660165  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
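
Because the profile's context was missing from the kubeconfig, minikube re-adds it under the write lock acquired above (500ms delay, 1m timeout). What that repair amounts to, as a client-go sketch with lock handling elided; the names and server argument are assumptions:

	package kubeconfig

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	// repairContext re-adds the profile's cluster/user/context entries
	// to a kubeconfig and writes it back.
	func repairContext(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Contexts[name]; !ok {
			cluster := api.NewCluster()
			cluster.Server = server
			cfg.Clusters[name] = cluster
			ctx := api.NewContext()
			ctx.Cluster = name
			ctx.AuthInfo = name
			cfg.Contexts[name] = ctx
		}
		return clientcmd.WriteToFile(*cfg, path)
	}
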
	I0531 18:09:24.661391  269289 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:09:24.667472  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.667510  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.674690  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.875005  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.875078  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.883754  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.075164  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.075227  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.083571  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.275792  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.275862  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.284084  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.475412  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.475492  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.483769  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.675018  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.675093  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.683421  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.875700  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.875758  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.884127  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.075367  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.075441  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.083636  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.274875  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.274936  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.283259  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.475530  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.475600  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.483831  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.675186  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.675264  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.683305  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.875527  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.875591  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.883712  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.074838  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.074911  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.082943  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.275201  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.275279  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.283352  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.475732  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.475810  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.484027  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.675351  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.675423  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.683562  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.683579  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.683610  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.690956  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.690974  269289 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
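
The block above is one poll loop: pgrep for a running apiserver roughly every 200ms, and when the budget runs out, conclude the cluster "needs reconfigure" and fall back to stopping and restarting the kube-system containers. A sketch of that loop using apimachinery's wait helper (an assumption; the interval and the few-seconds budget are read off the timestamps):

	package main

	import (
		"fmt"
		"os/exec"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func apiserverPID() (string, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		return string(out), err
	}

	func main() {
		err := wait.PollImmediate(200*time.Millisecond, 3*time.Second, func() (bool, error) {
			pid, err := apiserverPID()
			if err != nil {
				return false, nil // not up yet; keep polling
			}
			fmt.Println("apiserver pid:", pid)
			return true, nil
		})
		if err != nil {
			fmt.Println("needs reconfigure: apiserver error:", err)
		}
	}
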
	I0531 18:09:27.690980  269289 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:09:27.690988  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:09:27.691025  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:27.714735  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:27.714760  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:27.714770  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:27.714779  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:27.714788  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:27.714799  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:27.714808  269289 cri.go:87] found id: ""
	I0531 18:09:27.714813  269289 cri.go:232] Stopping containers: [1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad]
	I0531 18:09:27.714851  269289 ssh_runner.go:195] Run: which crictl
	I0531 18:09:27.717532  269289 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad
	I0531 18:09:27.740922  269289 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:09:27.750082  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:09:27.756789  269289 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:56 /etc/kubernetes/scheduler.conf
	
	I0531 18:09:27.756825  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:09:27.763097  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:09:27.769330  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:09:23.375648  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.375756  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.875533  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.704608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:29.705088  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.775774  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.775824  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:09:27.782146  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:09:27.788206  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.788249  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
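(The grep/rm pairs above implement a simple sanity check: each existing kubeconfig is grepped for the expected control-plane endpoint, and any file missing it is removed so kubeadm will regenerate it. A sketch of that check, using the same endpoint and paths printed in the log:)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Remove any kubeconfig that does not reference the expected endpoint;
    // a non-zero grep exit status means the endpoint string was not found.
    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Println("endpoint not found, removing", f)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }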
	I0531 18:09:27.794183  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800354  269289 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800371  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:27.840426  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.483103  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.611279  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.659562  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
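(The five Run lines above are a phased kubeadm re-init: certs, kubeconfig, kubelet-start, control-plane, then local etcd, all against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, invoking the pinned kubeadm binary by its full path instead of via the env PATH wrapper the log uses; paths are the ones printed above:)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Run each kubeadm init phase in order, stopping on the first failure.
    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.23.6/kubeadm"
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }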
	I0531 18:09:28.717430  269289 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:09:28.717495  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.225924  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.725826  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.225776  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.725310  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.225503  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.725612  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.225964  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.726000  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.375823  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.875541  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.205237  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:34.205608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:36.704946  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:33.225375  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:33.726285  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.225275  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.725662  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.736940  269289 api_server.go:71] duration metric: took 6.019510906s to wait for apiserver process to appear ...
	I0531 18:09:34.736971  269289 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:09:34.736983  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:34.737332  269289 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0531 18:09:35.238095  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:35.375635  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:37.378182  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:38.110402  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:09:38.110472  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:09:38.237764  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.306327  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.306358  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:38.737926  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.744239  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.744265  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.237517  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.242116  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:39.242143  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.738349  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.743589  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:09:39.749147  269289 api_server.go:140] control plane version: v1.23.6
	I0531 18:09:39.749166  269289 api_server.go:130] duration metric: took 5.012189517s to wait for apiserver health ...
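(The healthz wait that just completed is a plain polling loop: GET /healthz roughly every half second, tolerating the 403 from the anonymous user and the 500s while poststart hooks finish, until the body is "ok". A self-contained sketch of that loop; TLS verification is skipped here only because this illustrative client has no cluster CA, whereas the real client is configured with the cluster certificates:)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // Poll the apiserver healthz endpoint until it returns 200 "ok" or a
    // deadline passes, mirroring the retry cadence seen in the log.
    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.49.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for healthz")
    }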
	I0531 18:09:39.749173  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:39.749179  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:39.751213  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:09:38.705223  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:41.205391  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.752701  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:09:39.818871  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:09:39.818891  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:09:39.834366  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
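(The CNI step above is a single apply of the manifest minikube just copied to /var/tmp/minikube/cni.yaml, using the pinned kubectl and the in-VM kubeconfig. A one-shot sketch of that command, with the paths taken from the log:)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Apply the CNI manifest exactly as logged above.
    func main() {
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }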
	I0531 18:09:40.442544  269289 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:09:40.449925  269289 system_pods.go:59] 9 kube-system pods found
	I0531 18:09:40.449965  269289 system_pods.go:61] "coredns-64897985d-w2s2k" [272d1735-077b-4af0-a2fa-5b0f85c8e4fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.449977  269289 system_pods.go:61] "etcd-embed-certs-20220531175604-6903" [73942f36-bfae-49e5-87fb-43820d0182cc] Running
	I0531 18:09:40.449988  269289 system_pods.go:61] "kindnet-jrlsl" [c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:09:40.449999  269289 system_pods.go:61] "kube-apiserver-embed-certs-20220531175604-6903" [ecfb98e0-1317-4251-b80f-93138d93521a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:09:40.450014  269289 system_pods.go:61] "kube-controller-manager-embed-certs-20220531175604-6903" [f655b743-bb70-49d9-ab57-6575a05ae6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:09:40.450024  269289 system_pods.go:61] "kube-proxy-nvktf" [a3c917bd-93a0-40b6-85c5-7ea637a1aaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:09:40.450039  269289 system_pods.go:61] "kube-scheduler-embed-certs-20220531175604-6903" [7ff61926-9d06-4412-b29e-a3b958ae7fdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:09:40.450056  269289 system_pods.go:61] "metrics-server-b955d9d8-5hcnq" [bc956dda-3db9-478e-8fe9-4f4375857a12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450067  269289 system_pods.go:61] "storage-provisioner" [d2e35bf4-3bfa-46e5-91c7-70a4f1ab348a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450075  269289 system_pods.go:74] duration metric: took 7.509225ms to wait for pod list to return data ...
	I0531 18:09:40.450087  269289 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:09:40.452551  269289 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:09:40.452581  269289 node_conditions.go:123] node cpu capacity is 8
	I0531 18:09:40.452591  269289 node_conditions.go:105] duration metric: took 2.4994ms to run NodePressure ...
	I0531 18:09:40.452605  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:40.569376  269289 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573483  269289 kubeadm.go:777] kubelet initialised
	I0531 18:09:40.573510  269289 kubeadm.go:778] duration metric: took 4.104573ms waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573518  269289 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:09:40.580067  269289 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
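(The pod_ready wait that begins here, and whose status lines dominate the rest of this log, checks the pod's Ready condition on a timer until it flips to True or the 4m0s budget runs out. An illustrative re-implementation with client-go — not minikube's own pod_ready.go helper — using the kubeconfig path and pod name from this log:)

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // Poll the pod until its Ready condition is True or the deadline passes.
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").
    			Get(context.TODO(), "coredns-64897985d-w2s2k", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for Ready")
    }

(In this run the pod never schedules: every status line below reports PodScheduled=False because the lone node still carries the node.kubernetes.io/not-ready taint.)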
	I0531 18:09:42.585321  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.876171  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:42.376043  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:43.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:45.705136  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.586120  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:47.085707  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.875548  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:46.876233  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:48.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:50.206057  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.585053  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.585603  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.375708  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.875940  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:52.705353  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.705507  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.084883  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.085413  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:53.876111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.375796  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:57.205002  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:59.704756  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.085579  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.584952  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.585345  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.875791  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.876220  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.205444  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.205856  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:06.704493  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:07.085004  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:03.375587  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:05.876410  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.705331  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:11.205277  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:09.585758  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.085534  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.375300  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:10.375906  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.875744  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:13.205338  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:15.704598  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.085723  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:16.085916  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.876170  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.376062  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.705091  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.205306  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:18.584998  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.585431  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:19.875325  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:21.875824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:22.704807  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.205734  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.085208  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.585032  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.585926  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.876246  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:26.375824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.704471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.205614  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.085645  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:28.375853  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.875828  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.876426  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.205748  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.705114  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.586123  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.084913  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:35.376094  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.376934  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.204855  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.205282  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.704795  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.585507  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:42.084955  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.875448  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.875776  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:43.704835  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:45.705043  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.085118  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.085542  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.375440  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.876272  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.205603  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:50.704404  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.586090  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.085610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:49.376052  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.876540  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:52.705054  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:55.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:53.585839  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.084986  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:54.376090  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.875598  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:57.704861  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.205848  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.085315  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.085770  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.585759  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.876074  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.876394  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.704985  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.702287  261225 pod_ready.go:81] duration metric: took 4m0.002387032s waiting for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	E0531 18:11:03.702314  261225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:11:03.702336  261225 pod_ready.go:38] duration metric: took 4m0.006822044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:11:03.702359  261225 kubeadm.go:630] restartCluster took 4m15.094217425s
	W0531 18:11:03.702487  261225 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
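	The warning above is the end state of the repeated pod_ready messages: the lone node still carries the node.kubernetes.io/not-ready taint (normally cleared by the kubelet once the CNI reports the pod network ready), so each coredns pod stays Pending as Unschedulable until the 4m0s wait expires. A minimal way to confirm the taint and the pending pods by hand, assuming kubectl is on PATH and the context name matches the profile shown in this log:
	
	  kubectl --context no-preload-20220531175323-6903 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
	  kubectl --context no-preload-20220531175323-6903 -n kube-system get pods -l k8s-app=kube-dns -o wide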
	I0531 18:11:03.702514  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:11:05.343353  261225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.640813037s)
	I0531 18:11:05.343437  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:05.352859  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:11:05.359859  261225 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:11:05.359907  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:11:05.366334  261225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:11:05.366377  261225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
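	Because "kubeadm reset" removed the /etc/kubernetes/*.conf files, the stale-config check exits with status 2 and minikube falls straight through to a fresh "kubeadm init" with the listed preflight errors ignored. To dry-run just the preflight stage against the same config (a sketch reusing the binary path and kubeadm.yaml from this log; this exact invocation is not one minikube issues here):
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase preflight \
	    --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=SystemVerification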
	I0531 18:11:04.587602  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.085341  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.375499  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:05.375669  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.375797  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:12.086457  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.376111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:11.875588  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:14.586248  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:17.085311  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:13.875622  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:16.375856  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.709173  261225 out.go:204]   - Generating certificates and keys ...
	I0531 18:11:18.711907  261225 out.go:204]   - Booting up control plane ...
	I0531 18:11:18.714606  261225 out.go:204]   - Configuring RBAC rules ...
	I0531 18:11:18.716465  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:11:18.716486  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:11:18.717949  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:11:18.719198  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:11:18.722600  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:11:18.722616  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:11:18.735057  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
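	minikube renders the kindnet manifest, copies it over SSH to /var/tmp/minikube/cni.yaml, and applies it with the bundled kubectl. Once applied, the kindnet DaemonSet should come up in kube-system and let the kubelet clear the not-ready taint; a quick check (the app=kindnet selector is an assumption based on the stock kindnet manifest, not something shown in this log):
	
	  sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get pods -l app=kindnet -o wide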
	I0531 18:11:19.350353  261225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:11:19.350404  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.350427  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531175323-6903 minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.430085  261225 ops.go:34] apiserver oom_adj: -16
	I0531 18:11:19.430086  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.983488  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.483888  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:21.483016  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.087921  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.585530  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.876428  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.375897  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.983544  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.482945  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.483551  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.983592  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.483659  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.483167  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.982981  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:26.483682  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.585627  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.585786  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.585848  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:23.375949  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.375976  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.875813  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:26.983223  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.483242  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.983137  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.483188  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.483879  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.983081  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.483570  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.483729  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.537008  261225 kubeadm.go:1045] duration metric: took 12.186656817s to wait for elevateKubeSystemPrivileges.
	I0531 18:11:31.537033  261225 kubeadm.go:397] StartCluster complete in 4m42.970207425s
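	The burst of "kubectl get sa default" lines above is minikube waiting for kubeadm's post-init controllers to create the default ServiceAccount before it relies on the minikube-rbac binding; it polls about twice per second and took 12.19s here. The loop is roughly equivalent to this shell sketch, using the same paths as the log:
	
	  until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done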
	I0531 18:11:31.537049  261225 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:31.537139  261225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:11:31.538101  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:32.051475  261225 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531175323-6903" rescaled to 1
	I0531 18:11:32.051538  261225 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:11:32.053328  261225 out.go:177] * Verifying Kubernetes components...
	I0531 18:11:32.051603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:11:32.051631  261225 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:11:32.051839  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:11:32.054610  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:32.054630  261225 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054636  261225 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054661  261225 addons.go:65] Setting dashboard=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054667  261225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531175323-6903"
	I0531 18:11:32.054666  261225 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054677  261225 addons.go:153] Setting addon dashboard=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054685  261225 addons.go:165] addon dashboard should already be in state true
	I0531 18:11:32.054694  261225 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:32.054650  261225 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054708  261225 addons.go:165] addon metrics-server should already be in state true
	W0531 18:11:32.054715  261225 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:11:32.054731  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054749  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054752  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.055002  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055252  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055266  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055256  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.065260  261225 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:11:32.103052  261225 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:11:32.107208  261225 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108480  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:11:32.108501  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:11:32.109942  261225 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108562  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.111742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:11:32.111790  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:11:32.111836  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.114237  261225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:11:30.085797  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.086472  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:30.376185  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.875986  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.115989  261225 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.116015  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:11:32.116006  261225 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.116032  261225 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:11:32.116057  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.116079  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.116746  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.154692  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
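
The bash pipeline above fetches the coredns ConfigMap, uses sed to splice a `hosts` block in front of the `forward . /etc/resolv.conf` directive, and replaces the ConfigMap, so that host.minikube.internal resolves to the host-side gateway (192.168.67.1 for this cluster). The fragment it injects into the Corefile is:

    hosts {
       192.168.67.1 host.minikube.internal
       fallthrough
    }
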
	I0531 18:11:32.172159  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176336  261225 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.176359  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:11:32.176359  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176412  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.178381  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.214197  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.405898  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:11:32.405927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:11:32.418989  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.420809  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.421818  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:11:32.421841  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:11:32.427921  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:11:32.427945  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:11:32.502461  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:11:32.502491  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:11:32.513965  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:11:32.513988  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:11:32.519888  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.519911  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:11:32.534254  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:11:32.534276  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:11:32.613065  261225 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0531 18:11:32.613879  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.624901  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:11:32.624927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:11:32.718626  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:11:32.718699  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:11:32.811670  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:11:32.811701  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:11:32.908742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:11:32.908771  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:11:33.007964  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.007996  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:11:33.107854  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.513893  261225 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:34.072421  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.312128  261225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.204221815s)
	I0531 18:11:34.314039  261225 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:11:34.315350  261225 addons.go:417] enableAddons completed in 2.263731051s
	I0531 18:11:36.571090  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
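
The node_ready.go:58 lines track the node's Ready condition, which remains False until the pod network is functional and the kubelet reports ready. The same condition can be queried directly (a one-liner, assuming kubectl targets the no-preload cluster):

    kubectl get node no-preload-20220531175323-6903 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
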
	I0531 18:11:34.585378  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.085134  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:35.375998  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.876022  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:38.571611  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:40.571791  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:39.585652  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.085745  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:40.375543  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.375912  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.571860  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:45.071529  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:44.585489  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.084675  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:44.875655  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:46.876121  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.072141  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.570942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:51.571718  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.085124  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.585600  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:48.876234  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.375464  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:54.071361  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:56.072083  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:53.585630  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.085163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:53.876096  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.375046  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.571942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:01.071618  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:58.585559  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:01.085890  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.376093  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:00.876187  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:02.876231  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:03.072143  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:05.570848  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:03.086038  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.585743  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.375789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.875852  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.570951  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:09.571526  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:11.571905  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:08.085268  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.585201  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.375245  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:12.376080  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.071606  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:16.072126  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:13.085556  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:15.085642  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.585172  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.875789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.375091  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:18.571244  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:20.572070  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:19.585882  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:22.085420  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:19.375353  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:21.376109  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.071952  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:25.571538  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:24.586045  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.586231  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.876833  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.375751  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:27.571821  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.571882  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.085641  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:31.085685  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:28.875672  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:30.875820  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.876354  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.071187  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:34.071420  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:36.071564  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:33.585013  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.585739  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.375221  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:37.376282  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:38.570968  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:40.571348  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:38.085427  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.585163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:42.585706  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:39.875351  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.873471  265084 pod_ready.go:81] duration metric: took 4m0.002908121s waiting for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	E0531 18:12:40.873493  265084 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:12:40.873522  265084 pod_ready.go:38] duration metric: took 4m0.007756787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:12:40.873544  265084 kubeadm.go:630] restartCluster took 4m15.709568906s
	W0531 18:12:40.873671  265084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:12:40.873698  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:12:42.413886  265084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.540162536s)
	I0531 18:12:42.413945  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:12:42.423107  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:12:42.429872  265084 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:12:42.429912  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:12:42.436297  265084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:12:42.436331  265084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
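
Having timed out after 4m0s waiting for the system-critical pods, minikube gives up on restarting the existing cluster, wipes it with `kubeadm reset`, and re-initializes from the saved /var/tmp/minikube/kubeadm.yaml; the failed `ls` check (exit status 2) confirms the reset removed /etc/kubernetes/*.conf, so there is no stale config left to clean up. Reduced to its two essential commands, the recovery is (a sketch, assuming the same socket and config paths as above):

    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification
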
	I0531 18:12:42.571552  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.072252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.084735  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.085284  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.570931  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.571608  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:51.571648  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.584898  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:51.586112  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.656930  265084 out.go:204]   - Generating certificates and keys ...
	I0531 18:12:55.659590  265084 out.go:204]   - Booting up control plane ...
	I0531 18:12:55.662052  265084 out.go:204]   - Configuring RBAC rules ...
	I0531 18:12:55.664008  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:12:55.664023  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:12:55.665448  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:12:54.071703  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:56.071909  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:54.085829  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:56.584911  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.666615  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:12:55.670087  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:12:55.670101  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:12:55.683282  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:12:56.287125  265084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:12:56.287250  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.287269  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903 minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.294761  265084 ops.go:34] apiserver oom_adj: -16
	I0531 18:12:56.356555  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.931369  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.430985  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.930763  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.072370  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:00.571645  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:58.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:00.585876  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:58.431243  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.930845  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.431397  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.931568  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.431233  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.930831  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.430783  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.931582  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.431559  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.931164  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.072253  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:05.571252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:03.085192  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:05.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:03.431622  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.931432  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.431651  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.931602  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.431254  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.931669  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.431587  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.930870  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.431379  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.931781  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.431738  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.931001  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.988549  265084 kubeadm.go:1045] duration metric: took 12.701350416s to wait for elevateKubeSystemPrivileges.
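[annotation] The burst of half-second "get sa default" polls above is the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding, minikube loops until the kube-controller-manager has created the default ServiceAccount. A minimal shell equivalent of that loop (a sketch; binary and kubeconfig paths copied from the log):

	# Poll until the default ServiceAccount exists, mirroring the ~500ms retries logged above.
	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
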
	I0531 18:13:08.988585  265084 kubeadm.go:397] StartCluster complete in 4m43.864893986s
	I0531 18:13:08.988604  265084 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:08.988717  265084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:13:08.989847  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:09.508027  265084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531175509-6903" rescaled to 1
	I0531 18:13:09.508095  265084 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:13:09.510016  265084 out.go:177] * Verifying Kubernetes components...
	I0531 18:13:09.508139  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:13:09.508164  265084 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:13:09.508352  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:13:09.511398  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:09.511420  265084 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511435  265084 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511445  265084 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511451  265084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511458  265084 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:13:09.511464  265084 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511494  265084 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511509  265084 addons.go:165] addon metrics-server should already be in state true
	I0531 18:13:09.511510  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511560  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511421  265084 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511623  265084 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511642  265084 addons.go:165] addon dashboard should already be in state true
	I0531 18:13:09.511686  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511794  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512032  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512055  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512135  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.524059  265084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
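[annotation] The node_ready poll started here is what produces the repeated "Ready":"False" lines that follow: it watches the node's Ready condition for up to 6m0s. An equivalent one-off check from a shell (sketch; assumes kubectl targets this profile):

	# Prints "True" once the kubelet reports the node Ready; "False" while the CNI is still settling.
	kubectl get node default-k8s-different-port-20220531175509-6903 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
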
	I0531 18:13:09.564034  265084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:13:09.565498  265084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.565568  265084 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.566183  265084 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.566993  265084 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:13:09.566996  265084 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:13:09.568335  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:13:09.568357  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:13:09.567020  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:13:09.568408  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.568430  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.567022  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.569999  265084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.568977  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.571342  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:13:09.571364  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:13:09.571420  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.602943  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
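[annotation] The sed pipeline above splices a hosts block into the CoreDNS Corefile immediately before its "forward . /etc/resolv.conf" directive, so host.minikube.internal resolves to the Docker gateway (192.168.76.1 for this profile). Reconstructed from the sed expression, the injected Corefile fragment is roughly:

	# Fragment inserted into the kube-system coredns ConfigMap (reconstruction, not new config).
	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}
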
	I0531 18:13:09.610124  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.617756  265084 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.617783  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:13:09.617847  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.619119  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.626594  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.663799  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.809728  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:13:09.809753  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:13:09.810223  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:13:09.810246  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:13:09.816960  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.817408  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.824732  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:13:09.824754  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:13:09.825129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:13:09.825148  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:13:09.838196  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:13:09.838214  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:13:09.906924  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.906947  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:13:09.918653  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:13:09.918674  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:13:09.923798  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.931866  265084 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0531 18:13:10.002603  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:13:10.002630  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:13:10.020129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:13:10.020167  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:13:10.106375  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:13:10.106399  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:13:10.123765  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:13:10.123794  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:13:10.144076  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.144119  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:13:10.215134  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.622559  265084 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:11.339286  265084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.12406901s)
	I0531 18:13:11.341817  265084 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:13:07.571617  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:09.572391  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:08.085066  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:10.085481  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:12.585610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:11.343086  265084 addons.go:417] enableAddons completed in 1.834929772s
	I0531 18:13:11.534064  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:12.071842  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:14.072366  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:16.571700  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:15.085503  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:17.585477  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:13.534515  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:15.534685  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.034548  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.571742  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:21.071863  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:20.085466  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:22.585412  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:20.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.033886  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.570885  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:25.571318  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:24.585439  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:27.085126  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:25.034013  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.534277  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.571734  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:30.071441  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:29.585497  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:32.084886  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:29.534310  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:31.534424  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:32.072003  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.571176  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:36.571707  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:36.585785  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:33.534780  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:35.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.033558  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.571878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:41.071580  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:39.085376  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:40.582100  269289 pod_ready.go:81] duration metric: took 4m0.001996579s waiting for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
	E0531 18:13:40.582169  269289 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:13:40.582215  269289 pod_ready.go:38] duration metric: took 4m0.00868413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:13:40.582275  269289 kubeadm.go:630] restartCluster took 4m15.929419982s
	W0531 18:13:40.582469  269289 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:13:40.582501  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:13:42.189588  269289 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.6070578s)
	I0531 18:13:42.189653  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:42.199068  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:13:42.205821  269289 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:13:42.205873  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:13:42.212208  269289 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:13:42.212266  269289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:13:40.033618  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:42.034150  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:43.072273  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:45.571260  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:44.534599  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:46.535205  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:48.071960  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:50.570994  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:49.034908  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:51.534484  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:56.088137  269289 out.go:204]   - Generating certificates and keys ...
	I0531 18:13:56.090627  269289 out.go:204]   - Booting up control plane ...
	I0531 18:13:56.093129  269289 out.go:204]   - Configuring RBAC rules ...
	I0531 18:13:56.094774  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:13:56.094792  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:13:56.096311  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:13:52.571414  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:55.071807  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:56.097594  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:13:56.101201  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:13:56.101218  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:13:56.113794  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:13:56.748029  269289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:13:56.748093  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.748112  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.822037  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.825886  269289 ops.go:34] apiserver oom_adj: -16
	I0531 18:13:57.376176  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:53.534591  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:55.536859  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:58.033966  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:57.071988  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:59.571338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:01.573338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:57.876331  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.376391  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.876462  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.376020  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.876318  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.375732  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.876322  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.376258  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.875649  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:02.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.534980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:02.535335  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:04.071798  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:06.571345  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:02.875698  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.376128  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.876242  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.376344  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.875652  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.375884  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.876374  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.375802  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.876594  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:07.376348  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.033642  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.534735  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.876085  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.431068  269289 kubeadm.go:1045] duration metric: took 11.683021831s to wait for elevateKubeSystemPrivileges.
	I0531 18:14:08.431097  269289 kubeadm.go:397] StartCluster complete in 4m43.818756101s
	I0531 18:14:08.431119  269289 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.431248  269289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:14:08.432696  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.947002  269289 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 18:14:08.947054  269289 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:14:08.948675  269289 out.go:177] * Verifying Kubernetes components...
	I0531 18:14:08.947153  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:14:08.947168  269289 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:14:08.947347  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:14:08.950015  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:14:08.950062  269289 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950074  269289 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950082  269289 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950092  269289 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950100  269289 addons.go:165] addon metrics-server should already be in state true
	I0531 18:14:08.950148  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950083  269289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 18:14:08.950101  269289 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950271  269289 addons.go:165] addon dashboard should already be in state true
	I0531 18:14:08.950319  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950061  269289 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950364  269289 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950378  269289 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:14:08.950417  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950518  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950641  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950757  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950872  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.963248  269289 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:14:08.996303  269289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:14:08.997754  269289 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:08.997776  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:14:08.997822  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:08.999414  269289 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.999437  269289 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:14:08.999466  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:09.001079  269289 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:14:08.999831  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:09.003755  269289 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:14:09.002479  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:14:09.005292  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:14:09.005383  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.007089  269289 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:14:09.008807  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:14:09.008840  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:14:09.008896  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.039305  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
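	The pipeline above fetches the coredns ConfigMap, splices a hosts block in front of the forward directive with sed, and replaces the ConfigMap. Reconstructed from the sed expression (not captured from the cluster), the resulting Corefile fragment is:
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf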
	I0531 18:14:09.047368  269289 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.047395  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:14:09.047454  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.050164  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.052536  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.061683  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.090288  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
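	The four inspect templates above resolve the host port Docker mapped to the container's SSH port 22; each sshutil client then dials that port (49442 here). Run by hand, the same template prints just the port (a sketch, valid while the container exists):
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-20220531175604-6903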
	I0531 18:14:09.160469  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:09.212314  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:14:09.212343  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:14:09.216077  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.217010  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:14:09.217029  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:14:09.227272  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:14:09.227295  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:14:09.232066  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:14:09.232089  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:14:09.314812  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:14:09.314893  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:14:09.315135  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.315179  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:14:09.329470  269289 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0531 18:14:09.406176  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:14:09.406200  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:14:09.408128  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.429793  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:14:09.429823  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:14:09.516510  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:14:09.516537  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:14:09.530570  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:14:09.530597  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:14:09.612604  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:14:09.612631  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:14:09.631042  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:09.631070  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:14:09.711865  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:10.228589  269289 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531175604-6903"
	I0531 18:14:10.618859  269289 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:14:09.072442  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:11.571596  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:10.620376  269289 addons.go:417] enableAddons completed in 1.673239404s
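	Every addon above follows the same two-step pattern: scp the manifest from memory into /etc/kubernetes/addons/ over SSH, then apply the batch with the bundled kubectl. Reproduced by hand (a sketch using the minikube ssh wrapper and the paths logged above):
	    minikube -p embed-certs-20220531175604-6903 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml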
	I0531 18:14:10.977314  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:09.534892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:12.034154  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:13.571853  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:16.071183  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:13.477113  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:15.477267  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:17.477503  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:14.034360  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:16.534633  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:18.071690  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:20.570878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:19.976762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:22.476749  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:18.534988  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:21.033579  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:23.072062  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:25.571452  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:24.976557  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:26.977352  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:23.534867  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:26.034090  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:27.571576  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:30.072241  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:29.477126  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:31.976050  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:28.534857  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:31.033495  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:33.034149  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:32.571726  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:35.071520  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:33.977624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:36.477062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:35.034245  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.534397  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.072086  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:39.072305  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:41.571404  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:38.977021  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:41.477149  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:40.033940  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:42.534713  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:44.072401  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:46.571051  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:43.977151  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:46.476598  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:45.033654  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:47.033892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:48.571836  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:51.071985  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:48.477002  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:50.477180  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:49.034062  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:51.534464  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:53.571426  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:55.571659  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:52.976837  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:55.476624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:57.477076  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:54.033904  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:56.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:58.071447  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:00.072133  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:59.976445  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:01.976750  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:58.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:01.034476  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:02.072236  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.571269  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:06.571629  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:06.476291  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:03.534865  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:06.033980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:08.571738  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:11.071854  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:08.476832  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:10.976476  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:08.533980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:10.534643  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.033474  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.072258  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:15.072469  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:12.977062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.476762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.534881  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:18.033601  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:17.570753  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:19.571375  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:21.571479  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:17.976899  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:19.977288  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:22.476772  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:20.534891  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.033328  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.571892  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:26.071396  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:24.477019  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:26.976517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:25.033998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:27.534916  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:28.072042  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:30.571432  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:32.073761  261225 node_ready.go:38] duration metric: took 4m0.008462542s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:15:32.075979  261225 out.go:177] 
	W0531 18:15:32.077511  261225 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:15:32.077526  261225 out.go:239] * 
	W0531 18:15:32.078185  261225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:15:32.080080  261225 out.go:177] 
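	The GUEST_START exit above means the node never reported Ready within the wait window. The blocking condition shows up in the describe-nodes dump further down; to query it directly (a sketch with plain kubectl):
	    kubectl get node no-preload-20220531175323-6903 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'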
	I0531 18:15:29.477285  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:31.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:30.033697  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:32.033898  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:33.977634  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:36.476328  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:34.034272  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:36.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:38.476673  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:40.477412  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:38.534463  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:40.534774  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.976241  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:44.977315  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:47.476536  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:45.034265  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:47.534278  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:49.477384  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:51.976596  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:49.534496  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:51.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:54.476365  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:56.477128  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:54.033999  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:56.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:58.976541  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:00.976604  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:58.535059  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:01.033371  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:03.033446  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:02.976738  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:04.976824  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:07.476516  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:05.033660  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:07.034297  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:09.976551  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:11.977337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:09.534321  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:11.534699  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:14.476763  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:16.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:14.033838  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:16.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:18.976865  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:20.977366  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:18.533927  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:20.534762  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.034186  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.477097  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.976964  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.034285  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:27.534416  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:28.476490  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.477181  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.033979  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.534354  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.977105  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:35.477096  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:37.477182  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:34.534436  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:37.034012  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:39.976471  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:42.476550  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:39.534598  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:41.534728  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:44.976701  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:46.976746  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:44.033664  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:46.534914  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:49.476635  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:51.476946  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:48.535136  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:51.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:53.976362  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:55.976980  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:53.534196  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:55.534525  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:57.535035  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:58.476831  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.477321  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.033962  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.534939  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.976221  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.477114  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:07.477398  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.033341  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:07.033678  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.034288  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.536916  265084 node_ready.go:38] duration metric: took 4m0.012822769s waiting for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:17:09.538829  265084 out.go:177] 
	W0531 18:17:09.540332  265084 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:17:09.540349  265084 out.go:239] * 
	W0531 18:17:09.541063  265084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:17:09.542651  265084 out.go:177] 
	I0531 18:17:09.976861  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:12.476674  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:14.977142  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:17.477283  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:19.976577  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:21.978337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:24.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:26.476575  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:28.977103  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:31.476611  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:33.976344  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:35.977204  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:38.476416  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:40.977195  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:43.476141  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:45.476421  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:47.476462  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:49.476517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:51.477331  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:53.977100  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:56.476989  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:58.477779  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:00.976553  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:03.477250  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:05.976740  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:08.476618  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:08.978675  269289 node_ready.go:38] duration metric: took 4m0.015379225s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:18:08.980830  269289 out.go:177] 
	W0531 18:18:08.982370  269289 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:18:08.982392  269289 out.go:239] * 
	W0531 18:18:08.983213  269289 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:18:08.984834  269289 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	fe70a30634ea9       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   e4c8266a862fc
	44bc935b7eaae       4c03754524064       3 minutes ago        Running             kube-proxy                0                   e83bbc46b3d7b
	837b6342a4f49       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   e4c8266a862fc
	68cad910900a4       595f327f224a4       4 minutes ago        Running             kube-scheduler            2                   292f35260c680
	86a97a48de4c0       25f8c7f3da61c       4 minutes ago        Running             etcd                      2                   7f60645170e76
	bc23fd1cfc64c       df7b72818ad2e       4 minutes ago        Running             kube-controller-manager   2                   42a69ebb96716
	3eb0415d100e5       8fa62c12256df       4 minutes ago        Running             kube-apiserver            2                   0066133f16a45
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 18:09:08 UTC, end at Tue 2022-05-31 18:18:10 UTC. --
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.727109360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.727124605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.727468155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e83bbc46b3d7b3214e21f59f2aea4602e71bd584c4c4e2227e8366723f10123a pid=3433 runtime=io.containerd.runc.v2
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.904369516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2cxvx,Uid:acc14297-39c7-4997-9785-f1c36fe06ea9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\""
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.907920408Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.925006183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ffdqp,Uid:f099deaf-1ece-41f4-9910-f2475324f0ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"e83bbc46b3d7b3214e21f59f2aea4602e71bd584c4c4e2227e8366723f10123a\""
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.928358649Z" level=info msg="CreateContainer within sandbox \"e83bbc46b3d7b3214e21f59f2aea4602e71bd584c4c4e2227e8366723f10123a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.930312395Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"837b6342a4f49f8ff8e60dba74e0384b4a20a5901e5bfa5d86b5947d3e712c1a\""
	May 31 18:14:09 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:09.931074707Z" level=info msg="StartContainer for \"837b6342a4f49f8ff8e60dba74e0384b4a20a5901e5bfa5d86b5947d3e712c1a\""
	May 31 18:14:10 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:10.125910307Z" level=info msg="CreateContainer within sandbox \"e83bbc46b3d7b3214e21f59f2aea4602e71bd584c4c4e2227e8366723f10123a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"44bc935b7eaae3b821bde04fc2059159d8351a1ce19b072cbedafb551488d14f\""
	May 31 18:14:10 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:10.201749172Z" level=info msg="StartContainer for \"44bc935b7eaae3b821bde04fc2059159d8351a1ce19b072cbedafb551488d14f\""
	May 31 18:14:10 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:10.329912270Z" level=info msg="StartContainer for \"837b6342a4f49f8ff8e60dba74e0384b4a20a5901e5bfa5d86b5947d3e712c1a\" returns successfully"
	May 31 18:14:10 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:14:10.336781177Z" level=info msg="StartContainer for \"44bc935b7eaae3b821bde04fc2059159d8351a1ce19b072cbedafb551488d14f\" returns successfully"
	May 31 18:15:00 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:15:00.930590960Z" level=error msg="ContainerStatus for \"16d4f2c3fbc38177fa765442de587b61998e3c28eb17658c8ff7db3534d99c4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16d4f2c3fbc38177fa765442de587b61998e3c28eb17658c8ff7db3534d99c4a\": not found"
	May 31 18:15:00 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:15:00.931635306Z" level=error msg="ContainerStatus for \"8a6da5f7ec6ee5296c0ac1c46e8aaba6ea42e2ee7ee9559caaebea8733776e38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a6da5f7ec6ee5296c0ac1c46e8aaba6ea42e2ee7ee9559caaebea8733776e38\": not found"
	May 31 18:15:00 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:15:00.932340198Z" level=error msg="ContainerStatus for \"2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843\": not found"
	May 31 18:15:00 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:15:00.932824200Z" level=error msg="ContainerStatus for \"5729d74ef32936a9c83df420f2ce9b8c0959fe521ace63a98a579d7b2aa75993\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5729d74ef32936a9c83df420f2ce9b8c0959fe521ace63a98a579d7b2aa75993\": not found"
	May 31 18:16:50 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:16:50.646794853Z" level=info msg="shim disconnected" id=837b6342a4f49f8ff8e60dba74e0384b4a20a5901e5bfa5d86b5947d3e712c1a
	May 31 18:16:50 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:16:50.646853561Z" level=warning msg="cleaning up after shim disconnected" id=837b6342a4f49f8ff8e60dba74e0384b4a20a5901e5bfa5d86b5947d3e712c1a namespace=k8s.io
	May 31 18:16:50 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:16:50.646863278Z" level=info msg="cleaning up dead shim"
	May 31 18:16:50 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:16:50.656132423Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:16:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3675 runtime=io.containerd.runc.v2\n"
	May 31 18:16:51 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:16:51.474206937Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	May 31 18:16:51 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:16:51.486425311Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"fe70a30634ea97d62f66f1194dd9aa88573ebb2cf084f0d7ca32e561152178fa\""
	May 31 18:16:51 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:16:51.486802735Z" level=info msg="StartContainer for \"fe70a30634ea97d62f66f1194dd9aa88573ebb2cf084f0d7ca32e561152178fa\""
	May 31 18:16:51 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:16:51.705528185Z" level=info msg="StartContainer for \"fe70a30634ea97d62f66f1194dd9aa88573ebb2cf084f0d7ca32e561152178fa\" returns successfully"
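	The shim-disconnected/cleanup sequence above is containerd tearing down the exited kindnet-cni task (attempt 0) before kubelet's restart creates attempt 1 in the same sandbox. To list both attempts through the CRI (a sketch; the socket path is the cri-socket annotated on the node below):
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a --name kindnet-cni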
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531175604-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531175604-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531175604-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:13:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531175604-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:18:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:14:07 +0000   Tue, 31 May 2022 18:13:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:14:07 +0000   Tue, 31 May 2022 18:13:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:14:07 +0000   Tue, 31 May 2022 18:13:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:14:07 +0000   Tue, 31 May 2022 18:13:50 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20220531175604-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                9377e8f5-ae2b-465c-b601-bd790903b8eb
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220531175604-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m16s
	  kube-system                 kindnet-2cxvx                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-embed-certs-20220531175604-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-embed-certs-20220531175604-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-ffdqp                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-embed-certs-20220531175604-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m59s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  4m21s (x5 over 4m21s)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x4 over 4m21s)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x4 over 4m21s)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m10s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                   kubelet     Updated Node Allocatable limit across pods
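	The Ready condition above pins the failure: kubelet reports NetworkReady=false because the CNI plugin was never initialized, so the node keeps its not-ready taint and the readiness wait times out. A quick on-node check (a sketch, assuming the standard CNI config directory):
	    minikube -p embed-certs-20220531175604-6903 ssh -- ls /etc/cni/net.d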
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
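	The repeating martian-source lines are consistent with the CNI failure above: pod-subnet traffic (10.244.0.0/24) arrives on eth0 with no matching route, so the kernel flags it. Whether these get logged at all is governed by the rp_filter/log_martians sysctls (a sketch):
	    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter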
	
	* 
	* ==> etcd [86a97a48de4c022f7d4dd27bedcede1b2552effe64b4c218f7ca4157ffaa5033] <==
	* {"level":"info","ts":"2022-05-31T18:13:49.902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-05-31T18:13:49.902Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:embed-certs-20220531175604-6903 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:13:50.634Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:13:50.634Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  18:18:10 up  2:00,  0 users,  load average: 0.25, 0.42, 0.79
	Linux embed-certs-20220531175604-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3eb0415d100e52bfe0c1104b9ddf8b526fb204e0137f70d1c939bf1abb69a44e] <==
	* I0531 18:13:54.339319       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 18:13:54.342589       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:13:54.949539       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 18:13:55.862011       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 18:13:55.869476       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 18:13:55.877380       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 18:14:01.014276       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:14:08.605319       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0531 18:14:08.705422       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:14:10.221824       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.109.78.0]
	I0531 18:14:10.540687       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.101.83.248]
	I0531 18:14:10.610067       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.121.17]
	I0531 18:14:10.729571       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	W0531 18:14:11.101861       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:14:11.101943       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:14:11.101960       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:15:11.103117       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:15:11.103199       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:15:11.103208       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:17:11.104115       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:17:11.104215       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:17:11.104233       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [bc23fd1cfc64c42bba5c81e5279c39f1c49db486de602c2efbcd6b3eb2c19f97] <==
	* I0531 18:14:10.421808       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:14:10.424904       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:14:10.425136       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:14:10.426104       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:14:10.426138       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 18:14:10.431270       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 18:14:10.431304       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 18:14:10.506016       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-h54ht"
	I0531 18:14:10.507515       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-znnfl"
	E0531 18:14:38.021667       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:14:38.441159       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:15:08.039250       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:15:08.455355       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:15:38.057649       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:15:38.467995       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:16:08.073394       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:16:08.481837       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:16:38.090863       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:16:38.496048       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:17:08.104722       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:17:08.510534       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:17:38.119225       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:17:38.525132       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:18:08.132774       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:18:08.538749       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [44bc935b7eaae3b821bde04fc2059159d8351a1ce19b072cbedafb551488d14f] <==
	* I0531 18:14:10.520846       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 18:14:10.520915       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 18:14:10.520954       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:14:10.725611       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:14:10.725654       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:14:10.725666       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:14:10.725691       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:14:10.726120       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:14:10.726896       1 config.go:317] "Starting service config controller"
	I0531 18:14:10.726928       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:14:10.727273       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:14:10.727294       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:14:10.827115       1 shared_informer.go:247] Caches are synced for service config 
	I0531 18:14:10.827828       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [68cad910900a461fb5de4d316889c9efed39c08a2b46073700308723fac57649] <==
	* W0531 18:13:53.016951       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:13:53.018021       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:13:53.017084       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:13:53.018050       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:13:53.018160       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:13:53.018227       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:13:53.018334       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:13:53.018373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:13:53.018387       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:13:53.018426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:13:53.018453       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:13:53.018490       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:13:53.018458       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:13:53.018510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:13:53.902751       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:13:53.902799       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:13:53.909875       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:13:53.909929       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:13:53.920925       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:13:53.920954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:13:54.019088       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:13:54.019118       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:13:54.102584       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:13:54.102632       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 18:13:56.308272       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:09:08 UTC, end at Tue 2022-05-31 18:18:10 UTC. --
	May 31 18:16:11 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:11.204264    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:16 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:16.205093    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:21 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:21.206221    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:26 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:26.207193    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:31 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:31.208135    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:36 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:36.209143    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:41 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:41.210363    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:46 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:46.211311    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:51 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:51.212392    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:16:51 embed-certs-20220531175604-6903 kubelet[2864]: I0531 18:16:51.472267    2864 scope.go:110] "RemoveContainer" containerID="837b6342a4f49f8ff8e60dba74e0384b4a20a5901e5bfa5d86b5947d3e712c1a"
	May 31 18:16:56 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:16:56.213966    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:01 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:01.214899    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:06 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:06.216116    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:11 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:11.217829    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:16 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:16.219170    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:21 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:21.220700    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:26 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:26.222341    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:31 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:31.223184    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:36 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:36.224801    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:41 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:41.226286    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:46 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:46.227942    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:51 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:51.228655    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:17:56 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:17:56.230155    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:18:01 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:18:01.231638    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:18:06 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:18:06.232623    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-tnlml metrics-server-b955d9d8-6mjhp storage-provisioner dashboard-metrics-scraper-56974995fc-znnfl kubernetes-dashboard-8469778f77-h54ht
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-tnlml metrics-server-b955d9d8-6mjhp storage-provisioner dashboard-metrics-scraper-56974995fc-znnfl kubernetes-dashboard-8469778f77-h54ht
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-tnlml metrics-server-b955d9d8-6mjhp storage-provisioner dashboard-metrics-scraper-56974995fc-znnfl kubernetes-dashboard-8469778f77-h54ht: exit status 1 (53.755899ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-tnlml" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-6mjhp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-znnfl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-h54ht" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-tnlml metrics-server-b955d9d8-6mjhp storage-provisioner dashboard-metrics-scraper-56974995fc-znnfl kubernetes-dashboard-8469778f77-h54ht: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (543.27s)
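
The kubelet log in the output above repeats "cni plugin not initialized" for the whole wait window, and the apiserver/controller-manager logs show v1beta1.metrics.k8s.io answering 503, which matches the list of non-running pods (coredns, metrics-server, dashboard). A minimal triage sketch for this state, assuming the embed-certs-20220531175604-6903 node is still running (profile and context names are taken from the logs above; these commands are illustrative and not part of the recorded test run):

	# Was any CNI config ever written on the node?
	minikube ssh -p embed-certs-20220531175604-6903 "sudo ls -l /etc/cni/net.d"
	# Confirm the node condition that keeps pods Pending
	kubectl --context embed-certs-20220531175604-6903 describe node embed-certs-20220531175604-6903
	# Inspect the aggregated API the controller-manager cannot reach
	kubectl --context embed-certs-20220531175604-6903 get apiservice v1beta1.metrics.k8s.io

An empty /etc/cni/net.d would explain every downstream symptom here: the runtime reports NetworkReady=false, the node stays NotReady, and no workload pod (including metrics-server, hence the 503s) can schedule.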

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-269mb" [27c253b0-ab20-409c-848b-45ac6b4810be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0531 18:15:48.049724    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 18:16:05.002176    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 18:16:21.470191    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:16:22.756416    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous WARNING line repeated 34 times in total]
E0531 18:23:57.610861    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous WARNING line repeated 14 times in total]
E0531 18:24:11.142923    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous WARNING line repeated 14 times in total]
E0531 18:24:25.072965    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous WARNING line repeated 10 times in total]
start_stop_delete_test.go:276: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
start_stop_delete_test.go:276: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-05-31 18:24:34.42737186 +0000 UTC m=+4325.952518127
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 describe po kubernetes-dashboard-8469778f77-269mb -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20220531175323-6903 describe po kubernetes-dashboard-8469778f77-269mb -n kubernetes-dashboard: context deadline exceeded (1.522µs)
start_stop_delete_test.go:276: kubectl --context no-preload-20220531175323-6903 describe po kubernetes-dashboard-8469778f77-269mb -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 logs kubernetes-dashboard-8469778f77-269mb -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20220531175323-6903 logs kubernetes-dashboard-8469778f77-269mb -n kubernetes-dashboard: context deadline exceeded (183ns)
start_stop_delete_test.go:276: kubectl --context no-preload-20220531175323-6903 logs kubernetes-dashboard-8469778f77-269mb -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
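The dashboard pod above never left Pending because the lone node kept the node.kubernetes.io/not-ready taint (see the PodScheduled message at the top of this entry). A quick way to confirm that from outside the test harness, assuming the cluster is still up (context and profile names are taken from the logs; this is an illustrative check, not part of the recorded run):

	# Node readiness and the taint the pod did not tolerate
	kubectl --context no-preload-20220531175323-6903 get nodes
	kubectl --context no-preload-20220531175323-6903 describe node no-preload-20220531175323-6903 | grep -i -A3 taints
	# Kubelet's own view of why the runtime network is not ready
	minikube ssh -p no-preload-20220531175323-6903 "sudo journalctl -u kubelet --no-pager | tail -n 20"

A node stays NotReady, and keeps that taint, while the container runtime reports NetworkReady=false, which is the same kubelet condition logged for the embed-certs profile earlier in this report.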
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220531175323-6903
helpers_test.go:235: (dbg) docker inspect no-preload-20220531175323-6903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d",
	        "Created": "2022-05-31T17:53:25.199469079Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261508,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:06:32.565667778Z",
	            "FinishedAt": "2022-05-31T18:06:31.347829206Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/hosts",
	        "LogPath": "/var/lib/docker/containers/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d/a4f33d13fefce061114b57e7f9701c41bc75e924e4264b02543146b7a16f789d-json.log",
	        "Name": "/no-preload-20220531175323-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220531175323-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220531175323-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c45af836cb0594cac32f4d8e788ae5b96fafe365342d110045d163abdab5e083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220531175323-6903",
	                "Source": "/var/lib/docker/volumes/no-preload-20220531175323-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220531175323-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "name.minikube.sigs.k8s.io": "no-preload-20220531175323-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65d77ba8a692af3c9abf23c596fe50443fb99421003d3dd566b15d4ac739a15f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/65d77ba8a692",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220531175323-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a4f33d13fefc",
	                        "no-preload-20220531175323-6903"
	                    ],
	                    "NetworkID": "b2391a84ebd8e16dd2e9aca80777d6d03045cffc9cfc8290f45a61a1473c3244",
	                    "EndpointID": "2d286acc05ba36111035d982d1c124c6d8d7725e9ab99431bd3a13dd88d7ed81",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
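
Note on the inspect output above: the empty "HostPort" values under HostConfig.PortBindings mean Docker was asked to allocate ephemeral host ports, and the ports it actually chose appear under NetworkSettings.Ports (49428-49432 here). A minimal sketch of reading those values back, using the same Go-template `docker container inspect -f` pattern these logs show minikube itself running, with the profile name taken from this test run:

	# Host port mapped to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-20220531175323-6903
	# Container IP on the per-profile Docker network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-20220531175323-6903
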
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220531175323-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:00 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| start   | -p newest-cni-20220531175602-6903 --memory=2200            | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:00 UTC | 31 May 22 18:01 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                             | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                             |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903             |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:09 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                            |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                             | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:15 UTC | 31 May 22 18:15 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903             | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:17 UTC | 31 May 22 18:17 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                            | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:18 UTC | 31 May 22 18:18 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
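
The metrics-server rows in the audit table above exercise minikube's addon image and registry override flags. Reconstructed from the table rows (the profile name is a placeholder), the invocation shape is roughly:

	out/minikube-linux-amd64 addons enable metrics-server -p <profile> \
	  --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

The image is deliberately swapped for echoserver and the registry pointed at fake.domain, presumably so the test can verify that the overrides, rather than the defaults, are applied.
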
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:09:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
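
A worked example of that format, using the first entry below: in "I0531 18:09:07.772021  269289 out.go:296]", the leading I is the severity (Info), 0531 is May 31, 18:09:07.772021 is the wall-clock time, 269289 is the thread id, and out.go:296 is the source file and line that emitted the message.
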
	I0531 18:09:07.772021  269289 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:09:07.772181  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772198  269289 out.go:309] Setting ErrFile to fd 2...
	I0531 18:09:07.772206  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772308  269289 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:09:07.772576  269289 out.go:303] Setting JSON to false
	I0531 18:09:07.774031  269289 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6699,"bootTime":1654013849,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:09:07.774104  269289 start.go:125] virtualization: kvm guest
	I0531 18:09:07.776455  269289 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:09:07.777955  269289 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:09:07.777894  269289 notify.go:193] Checking for updates...
	I0531 18:09:07.779456  269289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:09:07.780983  269289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:07.782421  269289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:09:07.783767  269289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:09:07.785508  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:07.786063  269289 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:09:07.823614  269289 docker.go:137] docker version: linux-20.10.16
	I0531 18:09:07.823700  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:07.921312  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.851605066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:07.921419  269289 docker.go:254] overlay module found
	I0531 18:09:07.923729  269289 out.go:177] * Using the docker driver based on existing profile
	I0531 18:09:07.925091  269289 start.go:284] selected driver: docker
	I0531 18:09:07.925101  269289 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:07.925198  269289 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:09:07.926037  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:08.026136  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.954605342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:08.026404  269289 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:09:08.026427  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:08.026434  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:08.026459  269289 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:08.029735  269289 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 18:09:08.031102  269289 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:09:08.032588  269289 out.go:177] * Pulling base image ...
	I0531 18:09:08.033842  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:08.033877  269289 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:09:08.033887  269289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:09:08.033901  269289 cache.go:57] Caching tarball of preloaded images
	I0531 18:09:08.034419  269289 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:09:08.034462  269289 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:09:08.034678  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.080805  269289 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:09:08.080832  269289 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:09:08.080845  269289 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:09:08.080882  269289 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:09:08.080970  269289 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 68.005µs
	I0531 18:09:08.080987  269289 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:09:08.080995  269289 fix.go:55] fixHost starting: 
	I0531 18:09:08.081277  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.111663  269289 fix.go:103] recreateIfNeeded on embed-certs-20220531175604-6903: state=Stopped err=<nil>
	W0531 18:09:08.111695  269289 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:09:08.114165  269289 out.go:177] * Restarting existing docker container for "embed-certs-20220531175604-6903" ...
	I0531 18:09:03.375777  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:05.375862  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.875726  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.204868  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:09.704483  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:11.704965  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
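
The pod_ready lines above (from two concurrent test processes, thread ids 265084 and 261225) all report the same cause: coredns cannot be scheduled because the single node still carries the node.kubernetes.io/not-ready taint, which the pod does not tolerate. A quick way to confirm the taint when reproducing, assuming kubectl is pointed at the affected profile:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
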
	I0531 18:09:08.115697  269289 cli_runner.go:164] Run: docker start embed-certs-20220531175604-6903
	I0531 18:09:08.473488  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.506356  269289 kic.go:416] container "embed-certs-20220531175604-6903" state is running.
	I0531 18:09:08.506679  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:08.539365  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.539556  269289 machine.go:88] provisioning docker machine ...
	I0531 18:09:08.539577  269289 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 18:09:08.539613  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:08.571324  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:08.571535  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:08.571555  269289 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 18:09:08.572224  269289 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40978->127.0.0.1:49442: read: connection reset by peer
	I0531 18:09:11.691333  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 18:09:11.691423  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:11.724149  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:11.724284  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:11.724303  269289 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:09:11.834529  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:09:11.834559  269289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:09:11.834578  269289 ubuntu.go:177] setting up certificates
	I0531 18:09:11.834589  269289 provision.go:83] configureAuth start
	I0531 18:09:11.834632  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:11.865871  269289 provision.go:138] copyHostCerts
	I0531 18:09:11.865924  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:09:11.865932  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:09:11.865982  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:09:11.866066  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:09:11.866081  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:09:11.866104  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:09:11.866152  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:09:11.866160  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:09:11.866179  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:09:11.866227  269289 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
	I0531 18:09:12.090438  269289 provision.go:172] copyRemoteCerts
	I0531 18:09:12.090495  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:09:12.090534  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.123286  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.206789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:09:12.223789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:09:12.239992  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:09:12.256268  269289 provision.go:86] duration metric: configureAuth took 421.669237ms
	I0531 18:09:12.256294  269289 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:09:12.256475  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:12.256491  269289 machine.go:91] provisioned docker machine in 3.716920297s
	I0531 18:09:12.256499  269289 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 18:09:12.256507  269289 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:09:12.256545  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:09:12.256574  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.289898  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.374015  269289 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:09:12.376709  269289 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:09:12.376729  269289 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:09:12.376738  269289 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:09:12.376744  269289 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:09:12.376754  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:09:12.376804  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:09:12.376870  269289 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:09:12.376948  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:09:12.383195  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:12.399428  269289 start.go:309] post-start completed in 142.913406ms
	I0531 18:09:12.399507  269289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:09:12.399549  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.432219  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.515046  269289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:09:12.518751  269289 fix.go:57] fixHost completed within 4.437752156s
	I0531 18:09:12.518774  269289 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 4.437792479s
	I0531 18:09:12.518855  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:12.550553  269289 ssh_runner.go:195] Run: systemctl --version
	I0531 18:09:12.550601  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.550641  269289 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:09:12.550697  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.582421  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.582872  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.659019  269289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:09:12.679913  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:09:12.688643  269289 docker.go:187] disabling docker service ...
	I0531 18:09:12.688702  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:09:12.697529  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:09:12.706100  269289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:09:09.875798  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.375479  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:13.705008  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.205471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.785582  269289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:09:12.855815  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:09:12.864020  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:09:12.875934  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
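	The containerd config above is shipped as a base64 blob and decoded on the node with `base64 -d | sudo tee`, which sidesteps shell-quoting a multi-line TOML file. A minimal sketch of producing such a command, with a hypothetical config snippet standing in for the full config.toml (illustrative only, not minikube source):

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		// Hypothetical snippet; the real payload is the full config.toml.
		config := "version = 2\nroot = \"/var/lib/containerd\"\n"
		encoded := base64.StdEncoding.EncodeToString([]byte(config))
		// Build the remote command the same way the captured log line does.
		fmt.Printf("sudo mkdir -p /etc/containerd && printf %%s %q | base64 -d | sudo tee /etc/containerd/config.toml\n", encoded)
	}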
	I0531 18:09:12.888218  269289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:09:12.894177  269289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:09:12.900048  269289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:09:12.967246  269289 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:09:13.034030  269289 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:09:13.034097  269289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:09:13.037697  269289 start.go:468] Will wait 60s for crictl version
	I0531 18:09:13.037755  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:13.061656  269289 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:09:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:09:14.375758  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.375804  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.205548  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.207254  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.375866  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.875902  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.108851  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:24.132832  269289 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:09:24.132887  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.159676  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.189925  269289 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:09:24.191315  269289 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:09:24.222603  269289 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:09:24.225783  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.236610  269289 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:09:22.705143  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.205093  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.237875  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:24.237934  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.260983  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.261003  269289 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:09:24.261042  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.282964  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.282986  269289 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:09:24.283024  269289 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:09:24.304408  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:24.304433  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:24.304447  269289 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:09:24.304466  269289 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:09:24.304647  269289 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:09:24.304770  269289 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:09:24.304828  269289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:09:24.311429  269289 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:09:24.311492  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:09:24.317597  269289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 18:09:24.329144  269289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:09:24.340465  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
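	The kubeadm.yaml staged above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A sketch that walks the documents and prints each kind, assuming gopkg.in/yaml.v3 is available as a dependency:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		// The decoder yields one document per Decode call until io.EOF.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Println(err)
				return
			}
			fmt.Println(doc.Kind, doc.APIVersion)
		}
	}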
	I0531 18:09:24.352005  269289 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:09:24.354594  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
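	The two commands above make the control-plane.minikube.internal mapping idempotent: filter out any stale line for the host, append the fresh one, write a temp file, then copy it over /etc/hosts. A file-rewrite sketch of the same idea (hypothetical helper, not minikube code):

	package main

	import (
		"os"
		"strings"
	)

	func setHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing mapping for this host, mirroring the grep -v above.
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		tmp := path + ".new"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // the log uses "sudo cp" for the final swap
	}

	func main() {
		_ = setHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
	}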
	I0531 18:09:24.362803  269289 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 18:09:24.362883  269289 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:09:24.362917  269289 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:09:24.362978  269289 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 18:09:24.363028  269289 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 18:09:24.363065  269289 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 18:09:24.363186  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:09:24.363227  269289 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:09:24.363235  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:09:24.363261  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:09:24.363280  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:09:24.363304  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:09:24.363343  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:24.363895  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:09:24.379919  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:09:24.395597  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:09:24.411294  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:09:24.426758  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:09:24.442477  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:09:24.458117  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:09:24.473675  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:09:24.489258  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:09:24.504896  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:09:24.520499  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:09:24.536081  269289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:09:24.547693  269289 ssh_runner.go:195] Run: openssl version
	I0531 18:09:24.552235  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:09:24.559008  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561937  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561976  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.566453  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:09:24.572576  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:09:24.579358  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582018  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582059  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.586386  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:09:24.592600  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:09:24.599203  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.601990  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.602019  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.606281  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:09:24.612350  269289 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledS
top:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:24.612448  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:09:24.612477  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:24.635020  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:24.635044  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:24.635056  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:24.635066  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:24.635074  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:24.635089  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:24.635098  269289 cri.go:87] found id: ""
	I0531 18:09:24.635135  269289 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:09:24.646413  269289 cri.go:114] JSON = null
	W0531 18:09:24.646451  269289 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
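	The warning above comes from cross-checking two views of container state: `crictl ps` found six kube-system containers, while `runc list -f json` returned JSON null, so there was nothing to unpause. A sketch of that reconciliation, under the assumption that runc's JSON output is an array of state objects carrying an `id` field:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcState struct {
		ID string `json:"id"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "--root",
			"/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var paused []runcState
		_ = json.Unmarshal(out, &paused) // "null" leaves the slice empty
		psCount := 6                     // stand-in for the crictl ps count above
		if len(paused) == 0 && psCount > 0 {
			fmt.Printf("unpause failed: list returned 0 containers, but ps returned %d\n", psCount)
		}
	}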
	I0531 18:09:24.646485  269289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:09:24.652823  269289 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:09:24.652845  269289 kubeadm.go:626] restartCluster start
	I0531 18:09:24.652871  269289 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:09:24.658784  269289 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.659393  269289 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531175604-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:24.659677  269289 kubeconfig.go:127] "embed-certs-20220531175604-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:09:24.660165  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:09:24.661391  269289 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:09:24.667472  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.667510  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.674690  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.875005  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.875078  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.883754  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.075164  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.075227  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.083571  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.275792  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.275862  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.284084  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.475412  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.475492  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.483769  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.675018  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.675093  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.683421  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.875700  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.875758  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.884127  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.075367  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.075441  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.083636  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.274875  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.274936  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.283259  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.475530  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.475600  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.483831  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.675186  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.675264  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.683305  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.875527  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.875591  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.883712  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.074838  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.074911  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.082943  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.275201  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.275279  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.283352  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.475732  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.475810  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.484027  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.675351  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.675423  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.683562  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.683579  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.683610  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.690956  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.690974  269289 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
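	The run of "Checking apiserver status" entries above is a fixed-cadence poll: re-run pgrep roughly every 200ms until the process appears or a deadline passes, then give up with the timeout error just logged. A minimal sketch, with the timeout value as an assumption:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Same process check as the logged command.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return errors.New("timed out waiting for the condition")
	}

	func main() {
		if err := waitForAPIServer(3 * time.Second); err != nil {
			fmt.Println("needs reconfigure: apiserver error:", err)
		}
	}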
	I0531 18:09:27.690980  269289 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:09:27.690988  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:09:27.691025  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:27.714735  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:27.714760  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:27.714770  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:27.714779  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:27.714788  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:27.714799  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:27.714808  269289 cri.go:87] found id: ""
	I0531 18:09:27.714813  269289 cri.go:232] Stopping containers: [1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad]
	I0531 18:09:27.714851  269289 ssh_runner.go:195] Run: which crictl
	I0531 18:09:27.717532  269289 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad
	I0531 18:09:27.740922  269289 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:09:27.750082  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:09:27.756789  269289 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:56 /etc/kubernetes/scheduler.conf
	
	I0531 18:09:27.756825  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:09:27.763097  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:09:27.769330  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:09:23.375648  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.375756  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.875533  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.704608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:29.705088  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.775774  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.775824  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:09:27.782146  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:09:27.788206  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.788249  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:09:27.794183  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800354  269289 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800371  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:27.840426  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.483103  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.611279  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.659562  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.717430  269289 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:09:28.717495  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.225924  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.725826  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.225776  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.725310  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.225503  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.725612  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.225964  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.726000  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.375823  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.875541  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.205237  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:34.205608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:36.704946  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:33.225375  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:33.726285  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.225275  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.725662  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.736940  269289 api_server.go:71] duration metric: took 6.019510906s to wait for apiserver process to appear ...
	I0531 18:09:34.736971  269289 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:09:34.736983  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:34.737332  269289 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0531 18:09:35.238095  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:35.375635  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:37.378182  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:38.110402  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:09:38.110472  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:09:38.237764  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.306327  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.306358  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:38.737926  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.744239  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.744265  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.237517  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.242116  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:39.242143  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.738349  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.743589  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:09:39.749147  269289 api_server.go:140] control plane version: v1.23.6
	I0531 18:09:39.749166  269289 api_server.go:130] duration metric: took 5.012189517s to wait for apiserver health ...
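The healthz progression above is the normal apiserver restart sequence: the TCP connect is refused while nothing listens on 8443 yet; then /healthz returns 403 because the unauthenticated probe runs as system:anonymous and the rbac/bootstrap-roles poststarthook has not yet installed the roles that permit anonymous healthz access; then 500 while individual poststarthooks still report [-] failed; and finally 200 "ok". A hedged sketch of the retry loop in the spirit of api_server.go, not its actual code (waitForHealthz is an illustrative name):

    // Sketch: poll https://<ip>:8443/healthz, skipping TLS verification
    // (the probe presents no client certificate), and retry on anything
    // but HTTP 200, printing the 403/500 bodies the way the log records
    // them.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver answered "ok"
    			}
    			// 403: anonymous probe blocked until bootstrap RBAC roles exist.
    			// 500: at least one poststarthook still reports [-] failed.
    			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz never returned 200 within %v", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }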
	I0531 18:09:39.749173  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:39.749179  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:39.751213  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:09:38.705223  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:41.205391  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.752701  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:09:39.818871  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:09:39.818891  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:09:39.834366  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
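Because the docker driver with the containerd runtime defaults to kindnet, the CNI step above stats /opt/cni/bin/portmap, copies the 2429-byte kindnet manifest to /var/tmp/minikube/cni.yaml, and applies it with the kubelet-bundled kubectl. A sketch of that apply step, with the paths copied from the log and the command run locally instead of through the SSH runner:

    // Sketch: apply the CNI manifest with the bundled kubectl, using the
    // exact binary, kubeconfig, and manifest paths from the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	if err != nil {
    		fmt.Printf("kubectl apply failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Print(string(out))
    }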
	I0531 18:09:40.442544  269289 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:09:40.449925  269289 system_pods.go:59] 9 kube-system pods found
	I0531 18:09:40.449965  269289 system_pods.go:61] "coredns-64897985d-w2s2k" [272d1735-077b-4af0-a2fa-5b0f85c8e4fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.449977  269289 system_pods.go:61] "etcd-embed-certs-20220531175604-6903" [73942f36-bfae-49e5-87fb-43820d0182cc] Running
	I0531 18:09:40.449988  269289 system_pods.go:61] "kindnet-jrlsl" [c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:09:40.449999  269289 system_pods.go:61] "kube-apiserver-embed-certs-20220531175604-6903" [ecfb98e0-1317-4251-b80f-93138d93521a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:09:40.450014  269289 system_pods.go:61] "kube-controller-manager-embed-certs-20220531175604-6903" [f655b743-bb70-49d9-ab57-6575a05ae6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:09:40.450024  269289 system_pods.go:61] "kube-proxy-nvktf" [a3c917bd-93a0-40b6-85c5-7ea637a1aaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:09:40.450039  269289 system_pods.go:61] "kube-scheduler-embed-certs-20220531175604-6903" [7ff61926-9d06-4412-b29e-a3b958ae7fdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:09:40.450056  269289 system_pods.go:61] "metrics-server-b955d9d8-5hcnq" [bc956dda-3db9-478e-8fe9-4f4375857a12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450067  269289 system_pods.go:61] "storage-provisioner" [d2e35bf4-3bfa-46e5-91c7-70a4f1ab348a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450075  269289 system_pods.go:74] duration metric: took 7.509225ms to wait for pod list to return data ...
	I0531 18:09:40.450087  269289 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:09:40.452551  269289 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:09:40.452581  269289 node_conditions.go:123] node cpu capacity is 8
	I0531 18:09:40.452591  269289 node_conditions.go:105] duration metric: took 2.4994ms to run NodePressure ...
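The NodePressure check reads the node's reported capacity (304695084Ki ephemeral storage and 8 CPUs here) and verifies no pressure-style conditions are set. A hypothetical spot-check of the same data from outside the harness, assuming kubectl on PATH and a kubeconfig pointing at the cluster under test:

    // Hypothetical spot-check: print the single node's reported capacity
    // with kubectl's jsonpath output; not part of the minikube harness.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("kubectl", "get", "nodes",
    		"-o", "jsonpath={.items[0].status.capacity}").CombinedOutput()
    	if err != nil {
    		fmt.Println(err)
    	}
    	fmt.Print(string(out))
    }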
	I0531 18:09:40.452605  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:40.569376  269289 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573483  269289 kubeadm.go:777] kubelet initialised
	I0531 18:09:40.573510  269289 kubeadm.go:778] duration metric: took 4.104573ms waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573518  269289 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:09:40.580067  269289 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
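Everything from here on is the same wait loop running in three concurrent profiles (PIDs 261225, 265084, 269289): each CoreDNS pod stays Pending with PodScheduled=False because the node still carries the node.kubernetes.io/not-ready taint, which the kubelet only removes once the CNI plugin reports the node network ready. CoreDNS carries no toleration for that taint, so the scheduler keeps reporting "0/1 nodes are available", while DaemonSet-style pods that do tolerate it (kube-proxy, kindnet) run. A hypothetical way to confirm the taint with client-go, where the kubeconfig path is an assumption:

    // Hypothetical check: list each node's taints; while the CNI is not
    // ready you would expect node.kubernetes.io/not-ready to appear here.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is an assumption for this sketch.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Println(n.Name, n.Spec.Taints)
    	}
    }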
	I0531 18:09:42.585321  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.876171  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:42.376043  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:43.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:45.705136  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.586120  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:47.085707  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.875548  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:46.876233  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:48.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:50.206057  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.585053  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.585603  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.375708  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.875940  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:52.705353  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.705507  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.084883  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.085413  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:53.876111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.375796  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:57.205002  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:59.704756  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.085579  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.584952  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.585345  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.875791  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.876220  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.205444  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.205856  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:06.704493  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:07.085004  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:03.375587  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:05.876410  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.705331  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:11.205277  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:09.585758  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.085534  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.375300  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:10.375906  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.875744  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:13.205338  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:15.704598  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.085723  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:16.085916  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.876170  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.376062  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.705091  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.205306  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:18.584998  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.585431  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:19.875325  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:21.875824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:22.704807  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.205734  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.085208  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.585032  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.585926  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.876246  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:26.375824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.704471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.205614  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.085645  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:28.375853  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.875828  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.876426  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.205748  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.705114  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.586123  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.084913  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:35.376094  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.376934  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.204855  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.205282  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.704795  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.585507  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:42.084955  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.875448  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.875776  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:43.704835  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:45.705043  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.085118  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.085542  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.375440  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.876272  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.205603  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:50.704404  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.586090  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.085610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:49.376052  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.876540  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:52.705054  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:55.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:53.585839  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.084986  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:54.376090  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.875598  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:57.704861  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.205848  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.085315  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.085770  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.585759  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.876074  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.876394  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.704985  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.702287  261225 pod_ready.go:81] duration metric: took 4m0.002387032s waiting for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	E0531 18:11:03.702314  261225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:11:03.702336  261225 pod_ready.go:38] duration metric: took 4m0.006822044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:11:03.702359  261225 kubeadm.go:630] restartCluster took 4m15.094217425s
	W0531 18:11:03.702487  261225 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:11:03.702514  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:11:05.343353  261225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.640813037s)
	I0531 18:11:05.343437  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:05.352859  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:11:05.359859  261225 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:11:05.359907  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:11:05.366334  261225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:11:05.366377  261225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:11:04.587602  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.085341  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.375499  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:05.375669  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.375797  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:12.086457  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.376111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:11.875588  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:14.586248  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:17.085311  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:13.875622  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:16.375856  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.709173  261225 out.go:204]   - Generating certificates and keys ...
	I0531 18:11:18.711907  261225 out.go:204]   - Booting up control plane ...
	I0531 18:11:18.714606  261225 out.go:204]   - Configuring RBAC rules ...
	I0531 18:11:18.716465  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:11:18.716486  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:11:18.717949  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:11:18.719198  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:11:18.722600  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:11:18.722616  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:11:18.735057  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:11:19.350353  261225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:11:19.350404  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.350427  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531175323-6903 minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.430085  261225 ops.go:34] apiserver oom_adj: -16
	I0531 18:11:19.430086  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.983488  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.483888  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:21.483016  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.087921  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.585530  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.876428  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.375897  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.983544  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.482945  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.483551  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.983592  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.483659  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.483167  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.982981  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:26.483682  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.585627  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.585786  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.585848  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:23.375949  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.375976  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.875813  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:26.983223  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.483242  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.983137  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.483188  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.483879  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.983081  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.483570  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.483729  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.537008  261225 kubeadm.go:1045] duration metric: took 12.186656817s to wait for elevateKubeSystemPrivileges.
	I0531 18:11:31.537033  261225 kubeadm.go:397] StartCluster complete in 4m42.970207425s
	I0531 18:11:31.537049  261225 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:31.537139  261225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:11:31.538101  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:32.051475  261225 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531175323-6903" rescaled to 1
	I0531 18:11:32.051538  261225 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:11:32.053328  261225 out.go:177] * Verifying Kubernetes components...
	I0531 18:11:32.051603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:11:32.051631  261225 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:11:32.051839  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:11:32.054610  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:32.054630  261225 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054636  261225 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054661  261225 addons.go:65] Setting dashboard=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054667  261225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531175323-6903"
	I0531 18:11:32.054666  261225 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054677  261225 addons.go:153] Setting addon dashboard=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054685  261225 addons.go:165] addon dashboard should already be in state true
	I0531 18:11:32.054694  261225 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:32.054650  261225 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054708  261225 addons.go:165] addon metrics-server should already be in state true
	W0531 18:11:32.054715  261225 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:11:32.054731  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054749  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054752  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.055002  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055252  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055266  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055256  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.065260  261225 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:11:32.103052  261225 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:11:32.107208  261225 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108480  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:11:32.108501  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:11:32.109942  261225 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108562  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.111742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:11:32.111790  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:11:32.111836  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.114237  261225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:11:30.085797  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.086472  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:30.376185  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.875986  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.115989  261225 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.116015  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:11:32.116006  261225 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.116032  261225 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:11:32.116057  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.116079  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.116746  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.154692  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:11:32.172159  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176336  261225 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.176359  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:11:32.176359  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176412  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.178381  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.214197  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.405898  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:11:32.405927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:11:32.418989  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.420809  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.421818  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:11:32.421841  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:11:32.427921  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:11:32.427945  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:11:32.502461  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:11:32.502491  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:11:32.513965  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:11:32.513988  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:11:32.519888  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.519911  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:11:32.534254  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:11:32.534276  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:11:32.613065  261225 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
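
The sed pipeline at 18:11:32.154692 above rewrites the coredns ConfigMap so that guest pods can resolve host.minikube.internal to the Docker gateway (192.168.67.1 here), which is what the "host record injected into CoreDNS" line confirms. Assuming the stock kubeadm Corefile layout, the fragment the rewrite produces looks like the following, inserted just ahead of the existing forward directive:

    hosts {
       192.168.67.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
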
	I0531 18:11:32.613879  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.624901  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:11:32.624927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:11:32.718626  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:11:32.718699  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:11:32.811670  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:11:32.811701  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:11:32.908742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:11:32.908771  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:11:33.007964  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.007996  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:11:33.107854  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.513893  261225 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:34.072421  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.312128  261225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.204221815s)
	I0531 18:11:34.314039  261225 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:11:34.315350  261225 addons.go:417] enableAddons completed in 2.263731051s
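
Each addon above follows one pattern per asset: render the manifest in memory, copy it to /etc/kubernetes/addons/ over SSH (the "scp memory --> ..." lines), then apply the staged files in a single pinned-kubectl invocation. A minimal Go sketch of that pattern, with copyToGuest and runSSH as hypothetical stand-ins for minikube's ssh_runner (only the paths and the shape of the kubectl command are taken from the log):

    package main

    import (
        "os/exec"
        "strings"
    )

    // Hypothetical stand-ins for minikube's ssh_runner: for illustration they
    // shell out to "docker exec" against the node container.
    func runSSH(node, cmd string) error {
        return exec.Command("docker", "exec", node, "/bin/bash", "-c", cmd).Run()
    }

    func copyToGuest(node, path string, data []byte) error {
        c := exec.Command("docker", "exec", "-i", node, "sudo", "tee", path)
        c.Stdin = strings.NewReader(string(data)) // "scp memory --> <path>"
        return c.Run()
    }

    // enableAddon stages every manifest, then applies them in one kubectl call,
    // mirroring the dashboard batch at 18:11:33.107854.
    func enableAddon(node string, manifests map[string][]byte) error {
        paths := make([]string, 0, len(manifests))
        for name, data := range manifests {
            p := "/etc/kubernetes/addons/" + name
            if err := copyToGuest(node, p, data); err != nil {
                return err
            }
            paths = append(paths, p)
        }
        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.23.6/kubectl apply -f " +
            strings.Join(paths, " -f ")
        return runSSH(node, cmd)
    }

    func main() {}

Batching the ten dashboard manifests into one apply is why the "Completed" line at 18:11:34.312128 shows a single 1.2s round trip rather than ten separate ones.
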
	I0531 18:11:36.571090  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.585378  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.085134  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:35.375998  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.876022  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:38.571611  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:40.571791  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:39.585652  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.085745  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:40.375543  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.375912  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.571860  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:45.071529  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:44.585489  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.084675  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:44.875655  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:46.876121  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.072141  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.570942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:51.571718  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.085124  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.585600  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:48.876234  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.375464  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:54.071361  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:56.072083  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:53.585630  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.085163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:53.876096  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.375046  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.571942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:01.071618  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:58.585559  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:01.085890  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.376093  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:00.876187  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:02.876231  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:03.072143  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:05.570848  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:03.086038  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.585743  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.375789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.875852  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.570951  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:09.571526  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:11.571905  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:08.085268  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.585201  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.375245  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:12.376080  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.071606  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:16.072126  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:13.085556  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:15.085642  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.585172  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.875789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.375091  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:18.571244  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:20.572070  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:19.585882  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:22.085420  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:19.375353  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:21.376109  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.071952  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:25.571538  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:24.586045  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.586231  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.876833  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.375751  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:27.571821  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.571882  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.085641  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:31.085685  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:28.875672  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:30.875820  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.876354  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.071187  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:34.071420  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:36.071564  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:33.585013  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.585739  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.375221  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:37.376282  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:38.570968  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:40.571348  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:38.085427  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.585163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:42.585706  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:39.875351  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.873471  265084 pod_ready.go:81] duration metric: took 4m0.002908121s waiting for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	E0531 18:12:40.873493  265084 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:12:40.873522  265084 pod_ready.go:38] duration metric: took 4m0.007756787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:12:40.873544  265084 kubeadm.go:630] restartCluster took 4m15.709568906s
	W0531 18:12:40.873671  265084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
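
The pod_ready.go:102 lines are emitted by a bounded polling loop: fetch the pod, check its Ready condition, sleep, retry until the 4m0s budget is exhausted, at which point pod_ready.go:81/66 record the timeout seen just above. A sketch of such a loop against the standard client-go API, with the roughly 2.5s cadence inferred from the log timestamps (a sketch of the pattern, not minikube's exact code):

    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod reports Ready or the budget runs out.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2500 * time.Millisecond) // probes above land ~2.5s apart
        }
        return fmt.Errorf("timed out waiting %v for pod %q in %q to be Ready", timeout, name, ns)
    }

Here the loop can never succeed: the pod stays Pending/Unschedulable because the node keeps its node.kubernetes.io/not-ready taint, so the deadline fires and restartCluster falls back to the kubeadm reset below.
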
	I0531 18:12:40.873698  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:12:42.413886  265084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.540162536s)
	I0531 18:12:42.413945  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:12:42.423107  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:12:42.429872  265084 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:12:42.429912  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:12:42.436297  265084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:12:42.436331  265084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
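
The --ignore-preflight-errors list in this kubeadm init call is mostly a fixed set, with SystemVerification appended because of the docker driver, per the kubeadm.go:221 line above. A hedged sketch of how such a list might be assembled; the baseline entries are abbreviated and the function is illustrative, the authoritative list being the command line itself:

    package preflight

    // preflightIgnores sketches the assembly implied by kubeadm.go:221 above.
    func preflightIgnores(driver string) []string {
        ignores := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "Port-10250", "Swap", "Mem",
            "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
        }
        if driver == "docker" {
            // Container-based drivers cannot satisfy kubeadm's kernel checks.
            ignores = append(ignores, "SystemVerification")
        }
        return ignores
    }
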
	I0531 18:12:42.571552  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.072252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.084735  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.085284  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.570931  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.571608  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:51.571648  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.584898  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:51.586112  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.656930  265084 out.go:204]   - Generating certificates and keys ...
	I0531 18:12:55.659590  265084 out.go:204]   - Booting up control plane ...
	I0531 18:12:55.662052  265084 out.go:204]   - Configuring RBAC rules ...
	I0531 18:12:55.664008  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:12:55.664023  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:12:55.665448  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
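
cni.go:162 states the selection rule directly: the docker driver plus a non-docker runtime yields a kindnet recommendation when no CNI was forced on the command line. A small sketch of that decision; the final fallback branch is an assumption, not taken from the log:

    package cni

    // chooseCNI sketches the rule reported by cni.go:162 above.
    func chooseCNI(driver, runtime, requested string) string {
        if requested != "" {
            return requested // e.g. --cni=calico in the failing test at the top of this report
        }
        if driver == "docker" && runtime != "docker" {
            return "kindnet" // "docker" driver + containerd runtime found
        }
        return "" // assumption: otherwise defer to the runtime's default CNI
    }
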
	I0531 18:12:54.071703  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:56.071909  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:54.085829  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:56.584911  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.666615  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:12:55.670087  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:12:55.670101  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:12:55.683282  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:12:56.287125  265084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:12:56.287250  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.287269  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903 minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.294761  265084 ops.go:34] apiserver oom_adj: -16
	I0531 18:12:56.356555  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.931369  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.430985  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.930763  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.072370  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:00.571645  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:58.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:00.585876  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:58.431243  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.930845  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.431397  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.931568  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.431233  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.930831  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.430783  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.931582  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.431559  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.931164  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.072253  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:05.571252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:03.085192  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:05.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:03.431622  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.931432  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.431651  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.931602  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.431254  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.931669  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.431587  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.930870  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.431379  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.931781  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.431738  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.931001  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.988549  265084 kubeadm.go:1045] duration metric: took 12.701350416s to wait for elevateKubeSystemPrivileges.
	I0531 18:13:08.988585  265084 kubeadm.go:397] StartCluster complete in 4m43.864893986s
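The burst of identical `kubectl get sa default` calls above is minikube polling, at roughly 500ms intervals, until the "default" service account is visible; per the duration-metric line this is the wait inside the elevateKubeSystemPrivileges step. A minimal sketch of the equivalent loop, using only the binary and kubeconfig paths shown in the log:

    # Poll every ~500ms until the default service account exists.
    until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done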
	I0531 18:13:08.988604  265084 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:08.988717  265084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:13:08.989847  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:09.508027  265084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531175509-6903" rescaled to 1
	I0531 18:13:09.508095  265084 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:13:09.510016  265084 out.go:177] * Verifying Kubernetes components...
	I0531 18:13:09.508139  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:13:09.508164  265084 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:13:09.508352  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:13:09.511398  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:09.511420  265084 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511435  265084 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511445  265084 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511451  265084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511458  265084 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:13:09.511464  265084 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511494  265084 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511509  265084 addons.go:165] addon metrics-server should already be in state true
	I0531 18:13:09.511510  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511560  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511421  265084 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511623  265084 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511642  265084 addons.go:165] addon dashboard should already be in state true
	I0531 18:13:09.511686  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511794  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512032  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512055  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512135  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.524059  265084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
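The "waiting up to 6m0s" line starts a watch on the node's Ready condition; the node_ready.go:58 lines that follow are its periodic checks. A hand-run equivalent, assuming kubectl access to the same cluster (the node name is the profile name from the log):

    # Block until the node reports condition Ready=True, or time out after 6 minutes.
    kubectl wait --for=condition=Ready \
      node/default-k8s-different-port-20220531175509-6903 --timeout=6m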
	I0531 18:13:09.564034  265084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:13:09.565498  265084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.565568  265084 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.566183  265084 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.566993  265084 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:13:09.566996  265084 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:13:09.568335  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:13:09.568357  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:13:09.567020  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:13:09.568408  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.568430  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.567022  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.569999  265084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.568977  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.571342  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:13:09.571364  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:13:09.571420  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.602943  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:13:09.610124  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.617756  265084 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.617783  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:13:09.617847  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.619119  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.626594  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.663799  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.809728  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:13:09.809753  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:13:09.810223  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:13:09.810246  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:13:09.816960  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.817408  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.824732  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:13:09.824754  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:13:09.825129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:13:09.825148  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:13:09.838196  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:13:09.838214  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:13:09.906924  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.906947  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:13:09.918653  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:13:09.918674  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:13:09.923798  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.931866  265084 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
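The sed pipeline at 18:13:09.602943 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway; the "host record injected" line above confirms the replace succeeded. Reconstructed from the sed expression in the log (not captured output), the injected Corefile fragment sits immediately before the forward directive it anchors on:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf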
	I0531 18:13:10.002603  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:13:10.002630  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:13:10.020129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:13:10.020167  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:13:10.106375  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:13:10.106399  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:13:10.123765  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:13:10.123794  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:13:10.144076  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.144119  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:13:10.215134  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.622559  265084 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:11.339286  265084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.12406901s)
	I0531 18:13:11.341817  265084 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:13:07.571617  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:09.572391  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:08.085066  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:10.085481  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:12.585610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:11.343086  265084 addons.go:417] enableAddons completed in 1.834929772s
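With enableAddons complete, the four addons (default-storageclass, storage-provisioner, metrics-server, dashboard) have been applied from the /etc/kubernetes/addons manifests scp'd above. A quick spot-check, assuming kubectl access; the namespace and label selector are the conventional ones for these components, not shown in this log:

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kubernetes-dashboard get pods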
	I0531 18:13:11.534064  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:12.071842  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:14.072366  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:16.571700  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:15.085503  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:17.585477  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:13.534515  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:15.534685  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.034548  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.571742  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:21.071863  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:20.085466  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:22.585412  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:20.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.033886  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.570885  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:25.571318  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:24.585439  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:27.085126  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:25.034013  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.534277  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.571734  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:30.071441  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:29.585497  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:32.084886  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:29.534310  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:31.534424  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:32.072003  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.571176  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:36.571707  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:36.585785  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:33.534780  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:35.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.033558  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.571878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:41.071580  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:39.085376  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:40.582100  269289 pod_ready.go:81] duration metric: took 4m0.001996579s waiting for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
	E0531 18:13:40.582169  269289 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:13:40.582215  269289 pod_ready.go:38] duration metric: took 4m0.00868413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:13:40.582275  269289 kubeadm.go:630] restartCluster took 4m15.929419982s
	W0531 18:13:40.582469  269289 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:13:40.582501  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:13:42.189588  269289 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.6070578s)
	I0531 18:13:42.189653  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:42.199068  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:13:42.205821  269289 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:13:42.205873  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:13:42.212208  269289 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:13:42.212266  269289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
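The 18:13:40-18:13:42 lines for process 269289 show the recovery path: restartCluster timed out waiting for the system-critical pods, so minikube tears the control plane down and bootstraps a fresh one. Condensed from the interleaved log, the sequence it records is:

    # Tear down the failed control plane (containerd CRI socket).
    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    # Promote the staged config and re-init, skipping the preflight checks
    # listed on the init line above (they do not apply inside the docker-driver
    # container).
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables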
	I0531 18:13:40.033618  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:42.034150  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:43.072273  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:45.571260  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:44.534599  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:46.535205  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:48.071960  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:50.570994  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:49.034908  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:51.534484  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:56.088137  269289 out.go:204]   - Generating certificates and keys ...
	I0531 18:13:56.090627  269289 out.go:204]   - Booting up control plane ...
	I0531 18:13:56.093129  269289 out.go:204]   - Configuring RBAC rules ...
	I0531 18:13:56.094774  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:13:56.094792  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:13:56.096311  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:13:52.571414  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:55.071807  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:56.097594  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:13:56.101201  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:13:56.101218  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:13:56.113794  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
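For the docker driver with the containerd runtime, cni.go:162 selects kindnet as the CNI after the stat of /opt/cni/bin/portmap confirms the standard plugins are present, then applies the manifest it scp'd to /var/tmp/minikube/cni.yaml. Assuming the manifest deploys kindnet as a DaemonSet in kube-system with the usual app label (the typical layout; the manifest contents are not shown in this log), it can be verified with:

    kubectl -n kube-system get ds kindnet
    kubectl -n kube-system get pods -l app=kindnet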
	I0531 18:13:56.748029  269289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:13:56.748093  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.748112  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.822037  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.825886  269289 ops.go:34] apiserver oom_adj: -16
	I0531 18:13:57.376176  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:53.534591  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:55.536859  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:58.033966  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:57.071988  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:59.571338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:01.573338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:57.876331  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.376391  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.876462  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.376020  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.876318  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.375732  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.876322  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.376258  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.875649  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:02.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.534980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:02.535335  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:04.071798  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:06.571345  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:02.875698  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.376128  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.876242  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.376344  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.875652  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.375884  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.876374  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.375802  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.876594  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:07.376348  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.033642  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.534735  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.876085  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.431068  269289 kubeadm.go:1045] duration metric: took 11.683021831s to wait for elevateKubeSystemPrivileges.
	I0531 18:14:08.431097  269289 kubeadm.go:397] StartCluster complete in 4m43.818756101s
	I0531 18:14:08.431119  269289 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.431248  269289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:14:08.432696  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.947002  269289 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 18:14:08.947054  269289 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:14:08.948675  269289 out.go:177] * Verifying Kubernetes components...
	I0531 18:14:08.947153  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:14:08.947168  269289 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:14:08.947347  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:14:08.950015  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:14:08.950062  269289 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950074  269289 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950082  269289 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950092  269289 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950100  269289 addons.go:165] addon metrics-server should already be in state true
	I0531 18:14:08.950148  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950083  269289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 18:14:08.950101  269289 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950271  269289 addons.go:165] addon dashboard should already be in state true
	I0531 18:14:08.950319  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950061  269289 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950364  269289 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950378  269289 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:14:08.950417  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950518  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950641  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950757  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950872  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.963248  269289 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:14:08.996303  269289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:14:08.997754  269289 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:08.997776  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:14:08.997822  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:08.999414  269289 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.999437  269289 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:14:08.999466  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:09.001079  269289 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:14:08.999831  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:09.003755  269289 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:14:09.002479  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:14:09.005292  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:14:09.005383  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.007089  269289 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:14:09.008807  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:14:09.008840  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:14:09.008896  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.039305  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:14:09.047368  269289 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.047395  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:14:09.047454  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.050164  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.052536  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.061683  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.090288  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.160469  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:09.212314  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:14:09.212343  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:14:09.216077  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.217010  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:14:09.217029  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:14:09.227272  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:14:09.227295  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:14:09.232066  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:14:09.232089  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:14:09.314812  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:14:09.314893  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:14:09.315135  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.315179  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:14:09.329470  269289 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0531 18:14:09.406176  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:14:09.406200  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:14:09.408128  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.429793  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:14:09.429823  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:14:09.516510  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:14:09.516537  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:14:09.530570  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:14:09.530597  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:14:09.612604  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:14:09.612631  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:14:09.631042  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:09.631070  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:14:09.711865  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:10.228589  269289 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531175604-6903"
	I0531 18:14:10.618859  269289 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:14:09.072442  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:11.571596  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:10.620376  269289 addons.go:417] enableAddons completed in 1.673239404s
	I0531 18:14:10.977314  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:09.534892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:12.034154  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:13.571853  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:16.071183  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:13.477113  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:15.477267  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:17.477503  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:14.034360  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:16.534633  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:18.071690  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:20.570878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:19.976762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:22.476749  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:18.534988  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:21.033579  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:23.072062  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:25.571452  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:24.976557  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:26.977352  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:23.534867  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:26.034090  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:27.571576  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:30.072241  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:29.477126  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:31.976050  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:28.534857  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:31.033495  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:33.034149  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:32.571726  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:35.071520  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:33.977624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:36.477062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:35.034245  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.534397  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.072086  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:39.072305  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:41.571404  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:38.977021  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:41.477149  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:40.033940  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:42.534713  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:44.072401  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:46.571051  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:43.977151  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:46.476598  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:45.033654  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:47.033892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:48.571836  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:51.071985  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:48.477002  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:50.477180  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:49.034062  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:51.534464  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:53.571426  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:55.571659  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:52.976837  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:55.476624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:57.477076  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:54.033904  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:56.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:58.071447  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:00.072133  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:59.976445  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:01.976750  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:58.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:01.034476  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:02.072236  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.571269  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:06.571629  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:06.476291  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:03.534865  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:06.033980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:08.571738  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:11.071854  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:08.476832  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:10.976476  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:08.533980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:10.534643  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.033474  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.072258  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:15.072469  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:12.977062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.476762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.534881  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:18.033601  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:17.570753  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:19.571375  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:21.571479  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:17.976899  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:19.977288  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:22.476772  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:20.534891  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.033328  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.571892  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:26.071396  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:24.477019  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:26.976517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:25.033998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:27.534916  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:28.072042  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:30.571432  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:32.073761  261225 node_ready.go:38] duration metric: took 4m0.008462542s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:15:32.075979  261225 out.go:177] 
	W0531 18:15:32.077511  261225 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:15:32.077526  261225 out.go:239] * 
	W0531 18:15:32.078185  261225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:15:32.080080  261225 out.go:177] 
	I0531 18:15:29.477285  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:31.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:30.033697  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:32.033898  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:33.977634  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:36.476328  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:34.034272  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:36.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:38.476673  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:40.477412  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:38.534463  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:40.534774  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.976241  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:44.977315  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:47.476536  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:45.034265  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:47.534278  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:49.477384  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:51.976596  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:49.534496  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:51.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:54.476365  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:56.477128  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:54.033999  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:56.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:58.976541  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:00.976604  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:58.535059  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:01.033371  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:03.033446  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:02.976738  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:04.976824  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:07.476516  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:05.033660  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:07.034297  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:09.976551  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:11.977337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:09.534321  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:11.534699  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:14.476763  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:16.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:14.033838  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:16.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:18.976865  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:20.977366  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:18.533927  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:20.534762  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.034186  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.477097  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.976964  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.034285  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:27.534416  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:28.476490  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.477181  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.033979  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.534354  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.977105  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:35.477096  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:37.477182  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:34.534436  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:37.034012  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:39.976471  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:42.476550  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:39.534598  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:41.534728  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:44.976701  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:46.976746  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:44.033664  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:46.534914  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:49.476635  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:51.476946  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:48.535136  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:51.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:53.976362  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:55.976980  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:53.534196  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:55.534525  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:57.535035  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:58.476831  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.477321  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.033962  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.534939  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.976221  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.477114  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:07.477398  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.033341  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:07.033678  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.034288  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.536916  265084 node_ready.go:38] duration metric: took 4m0.012822769s waiting for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:17:09.538829  265084 out.go:177] 
	W0531 18:17:09.540332  265084 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:17:09.540349  265084 out.go:239] * 
	W0531 18:17:09.541063  265084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:17:09.542651  265084 out.go:177] 
	I0531 18:17:09.976861  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:12.476674  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:14.977142  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:17.477283  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:19.976577  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:21.978337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:24.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:26.476575  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:28.977103  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:31.476611  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:33.976344  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:35.977204  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:38.476416  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:40.977195  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:43.476141  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:45.476421  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:47.476462  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:49.476517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:51.477331  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:53.977100  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:56.476989  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:58.477779  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:00.976553  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:03.477250  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:05.976740  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:08.476618  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:08.978675  269289 node_ready.go:38] duration metric: took 4m0.015379225s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:18:08.980830  269289 out.go:177] 
	W0531 18:18:08.982370  269289 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:18:08.982392  269289 out.go:239] * 
	W0531 18:18:08.983213  269289 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:18:08.984834  269289 out.go:177] 
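	
	All three profiles above fail the same way: the node's Ready condition is polled roughly every 2.5 seconds, and after 4 minutes of "Ready":"False" the 6m GUEST_START wait gives up with exit status 80. As a minimal sketch only (not minikube's actual node_ready.go; the kubeconfig path and timing constants are illustrative assumptions), a client-go loop of this shape would produce exactly these log lines:
	
	// readiness_wait_sketch.go -- a minimal sketch, NOT minikube's node_ready.go:
	// poll a node's Ready condition until it flips to True or a deadline expires.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the Node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		kubeconfig := "/path/to/kubeconfig" // assumption: any kubeconfig for the cluster
		name := "no-preload-20220531175323-6903"
	
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// ~2.5s interval and a 4m budget mirror the cadence seen in the log above.
		err = wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			n, getErr := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat transient API errors as retryable
			}
			if !nodeReady(n) {
				fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
				return false, nil
			}
			return true, nil
		})
		if err != nil {
			// wait.ErrWaitTimeout stringifies to "timed out waiting for the
			// condition", the exact message in the GUEST_START error above.
			fmt.Println(err)
		}
	}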
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	077c0ae7aaa62       6de166512aa22       46 seconds ago      Running             kindnet-cni               4                   ada1687a9236a
	74af9004ff480       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   ada1687a9236a
	46ae8b49a2f40       4c03754524064       13 minutes ago      Running             kube-proxy                0                   17a4d7d0aae07
	19553e3109d01       595f327f224a4       13 minutes ago      Running             kube-scheduler            2                   acca4113a0648
	da2122c0c30c1       8fa62c12256df       13 minutes ago      Running             kube-apiserver            2                   d6e71d2677426
	434c691688029       df7b72818ad2e       13 minutes ago      Running             kube-controller-manager   2                   9cfc7f577b371
	26844adc7521e       25f8c7f3da61c       13 minutes ago      Running             etcd                      2                   63e1a77a97e58
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 18:06:32 UTC, end at Tue 2022-05-31 18:24:35 UTC. --
	May 31 18:16:55 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:16:55.370980295Z" level=info msg="RemoveContainer for \"3dde6d2f94876ab012c0b4c98d7d32ff7c5a157c3dd7e894140a36e8d24f9fff\" returns successfully"
	May 31 18:17:10 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:17:10.726915340Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	May 31 18:17:10 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:17:10.739423344Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"019192d2188d45808b026f85963c44e74dabefdd1c8fa31f223dbe0c25f546a2\""
	May 31 18:17:10 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:17:10.740794690Z" level=info msg="StartContainer for \"019192d2188d45808b026f85963c44e74dabefdd1c8fa31f223dbe0c25f546a2\""
	May 31 18:17:10 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:17:10.816673964Z" level=info msg="StartContainer for \"019192d2188d45808b026f85963c44e74dabefdd1c8fa31f223dbe0c25f546a2\" returns successfully"
	May 31 18:19:51 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:19:51.131193822Z" level=info msg="shim disconnected" id=019192d2188d45808b026f85963c44e74dabefdd1c8fa31f223dbe0c25f546a2
	May 31 18:19:51 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:19:51.131266309Z" level=warning msg="cleaning up after shim disconnected" id=019192d2188d45808b026f85963c44e74dabefdd1c8fa31f223dbe0c25f546a2 namespace=k8s.io
	May 31 18:19:51 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:19:51.131279651Z" level=info msg="cleaning up dead shim"
	May 31 18:19:51 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:19:51.139938137Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:19:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4182 runtime=io.containerd.runc.v2\n"
	May 31 18:19:51 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:19:51.671783214Z" level=info msg="RemoveContainer for \"de47473beb36b8f765ea3845c0b7e422906b2af82381f3a5778f4beeeba0c624\""
	May 31 18:19:51 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:19:51.675997183Z" level=info msg="RemoveContainer for \"de47473beb36b8f765ea3845c0b7e422906b2af82381f3a5778f4beeeba0c624\" returns successfully"
	May 31 18:20:18 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:20:18.726139004Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	May 31 18:20:18 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:20:18.737639329Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c\""
	May 31 18:20:18 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:20:18.738085241Z" level=info msg="StartContainer for \"74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c\""
	May 31 18:20:18 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:20:18.805099986Z" level=info msg="StartContainer for \"74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c\" returns successfully"
	May 31 18:22:59 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:22:59.036021438Z" level=info msg="shim disconnected" id=74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c
	May 31 18:22:59 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:22:59.036089371Z" level=warning msg="cleaning up after shim disconnected" id=74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c namespace=k8s.io
	May 31 18:22:59 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:22:59.036102229Z" level=info msg="cleaning up dead shim"
	May 31 18:22:59 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:22:59.045393472Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:22:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4285 runtime=io.containerd.runc.v2\n"
	May 31 18:22:59 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:22:59.969490701Z" level=info msg="RemoveContainer for \"019192d2188d45808b026f85963c44e74dabefdd1c8fa31f223dbe0c25f546a2\""
	May 31 18:22:59 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:22:59.973752407Z" level=info msg="RemoveContainer for \"019192d2188d45808b026f85963c44e74dabefdd1c8fa31f223dbe0c25f546a2\" returns successfully"
	May 31 18:23:48 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:23:48.727013217Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	May 31 18:23:48 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:23:48.739023363Z" level=info msg="CreateContainer within sandbox \"ada1687a9236a1f0f6fb992e712748c9c8799a4dd82e615d2c91e03ded89730e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"077c0ae7aaa62e707f394b94c64a6112744ae6b2e42db35d21d4394a2c0d4ea1\""
	May 31 18:23:48 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:23:48.739480123Z" level=info msg="StartContainer for \"077c0ae7aaa62e707f394b94c64a6112744ae6b2e42db35d21d4394a2c0d4ea1\""
	May 31 18:23:48 no-preload-20220531175323-6903 containerd[380]: time="2022-05-31T18:23:48.816278270Z" level=info msg="StartContainer for \"077c0ae7aaa62e707f394b94c64a6112744ae6b2e42db35d21d4394a2c0d4ea1\" returns successfully"
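	
	This containerd log is the proximate cause of the NotReady node: the kindnet-cni container is created, runs for a couple of minutes, and dies (attempts 2, 3 and 4 above), so the CNI never initializes. The growing gaps between attempts match kubelet's CrashLoopBackOff schedule, which (as an upstream kubelet default, assumed here rather than read from this cluster) starts at 10s and doubles per restart up to a 5-minute cap. A toy sketch of that schedule:
	
	// backoff_sketch.go -- illustrates kubelet's CrashLoopBackOff spacing
	// (base 10s, doubling per restart, capped at 5m); the constants are the
	// assumed upstream defaults, not values read from this cluster.
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		backoff := 10 * time.Second
		const maxBackoff = 5 * time.Minute
		for attempt := 1; attempt <= 7; attempt++ {
			fmt.Printf("restart attempt %d: wait %v\n", attempt, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}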
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220531175323-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220531175323-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=no-preload-20220531175323-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:11:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220531175323-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:24:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:21:46 +0000   Tue, 31 May 2022 18:11:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:21:46 +0000   Tue, 31 May 2022 18:11:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:21:46 +0000   Tue, 31 May 2022 18:11:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:21:46 +0000   Tue, 31 May 2022 18:11:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220531175323-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                3f650030-6900-444d-b03b-802678a62df1
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220531175323-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-s4rf7                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-no-preload-20220531175323-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-20220531175323-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-m75cf                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-20220531175323-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 13m   kube-proxy  
	  Normal  Starting                 13m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet     Node no-preload-20220531175323-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet     Updated Node Allocatable limit across pods
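	
	The describe output pins down the root cause: Ready is False with reason KubeletNotReady ("cni plugin not initialized"), and the node.kubernetes.io/not-ready:NoSchedule taint is still in place. A small helper, meant to slot into the poll sketch earlier (same client-go assumptions; a sketch, not minikube code), that surfaces the reason and message instead of only the boolean status:
	
	// diagnose_sketch.go -- companion fragment to the readiness sketch above:
	// report WHY the node is stuck NotReady instead of only that it is.
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	// diagnoseNotReady prints the Ready condition's Reason/Message (for this
	// node: KubeletNotReady, "... cni plugin not initialized") and any
	// not-ready taint still present on the node.
	func diagnoseNotReady(n *corev1.Node) {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				fmt.Printf("node %s not ready: reason=%s message=%q\n",
					n.Name, c.Reason, c.Message)
			}
		}
		for _, t := range n.Spec.Taints {
			if t.Key == "node.kubernetes.io/not-ready" {
				fmt.Printf("node %s tainted: %s:%s\n", n.Name, t.Key, t.Effect)
			}
		}
	}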
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [26844adc7521e3998a8fd7eb5959acfe71aef6577d68e710c3fc6d6d97fe5939] <==
	* {"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-05-31T18:11:13.012Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:11:13.702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220531175323-6903 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:11:13.703Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:11:13.704Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-05-31T18:11:13.704Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:21:13.716Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":642}
	{"level":"info","ts":"2022-05-31T18:21:13.717Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":642,"took":"564.626µs"}
	
	* 
	* ==> kernel <==
	*  18:24:35 up  2:07,  0 users,  load average: 0.24, 0.23, 0.56
	Linux no-preload-20220531175323-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [da2122c0c30c19a146de1126066a9662a3593887fda1084cb52b23bd621aedac] <==
	* I0531 18:14:34.306598       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:16:16.416957       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:16:16.417031       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:16:16.417041       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:17:16.418123       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:17:16.418163       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:17:16.418170       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:19:16.418422       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:19:16.418501       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:19:16.418516       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:21:16.422859       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:21:16.422968       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:21:16.422992       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:22:16.423928       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:22:16.424001       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:22:16.424009       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:24:16.424754       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:24:16.424835       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:24:16.424852       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
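	
	The recurring OpenAPI 503s for v1beta1.metrics.k8s.io here (and the matching discovery errors in the kube-controller-manager log below) are a downstream symptom of the same CNI failure: an aggregated metrics API is registered, but with the node NotReady its backing pods cannot run, so the aggregator has no healthy endpoint. A hedged sketch of inspecting that APIService's Available condition with the kube-aggregator clientset (import paths as in upstream kube-aggregator; an assumption to verify against your vendored version):
	
	// apiservice_check_sketch.go -- checks whether an aggregated API
	// (here v1beta1.metrics.k8s.io) reports Available; a 503 like the one
	// above corresponds to Available=False.
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
		aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		ac := aggregator.NewForConfigOrDie(cfg)
	
		svc, err := ac.ApiregistrationV1().APIServices().Get(
			context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range svc.Status.Conditions {
			if c.Type == apiregv1.Available {
				fmt.Printf("v1beta1.metrics.k8s.io Available=%s reason=%s: %s\n",
					c.Status, c.Reason, c.Message)
			}
		}
	}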
	
	* 
	* ==> kube-controller-manager [434c691688029a16594dcace8e5cd18542a4229065076549eda61aee4dd3471c] <==
	* W0531 18:18:31.931255       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:19:01.529763       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:19:01.944935       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:19:31.542877       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:19:31.958929       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:20:01.554239       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:20:01.977675       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:20:31.566115       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:20:31.991474       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:21:01.577971       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:21:02.006926       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:21:31.588173       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:21:32.021818       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:22:01.599655       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:22:02.037644       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:22:31.609686       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:22:32.051596       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:23:01.618009       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:23:02.065215       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:23:31.627254       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:23:32.080353       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:24:01.641080       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:24:02.094429       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:24:31.654521       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:24:32.107799       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [46ae8b49a2f40a2cfbd705f82fa54f8df0a59683b743d77c8ded4297a54aca3e] <==
	* I0531 18:11:33.203064       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 18:11:33.203207       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 18:11:33.203901       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:11:33.318321       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:11:33.318358       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:11:33.318369       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:11:33.318386       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:11:33.318744       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:11:33.319314       1 config.go:317] "Starting service config controller"
	I0531 18:11:33.319353       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:11:33.319317       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:11:33.319439       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:11:33.419858       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 18:11:33.420033       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [19553e3109d01af350a34965aa8b487908f950b1367f8f44363976e2b121b2d5] <==
	* W0531 18:11:15.420123       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:11:15.420191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:11:15.420143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:11:15.420094       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:15.419916       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:15.420215       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:11:15.420242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:11:15.420242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:15.420641       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:11:15.420679       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:11:15.420701       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:15.421027       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.318206       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:16.318242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.395823       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:16.395882       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.428498       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:11:16.428534       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:11:16.482737       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:16.482776       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.485585       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:11:16.485613       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:11:16.630989       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:11:16.631027       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 18:11:19.315480       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:06:32 UTC, end at Tue 2022-05-31 18:24:35 UTC. --
	May 31 18:23:09 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:09.078292    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:10 no-preload-20220531175323-6903 kubelet[2847]: I0531 18:23:10.724549    2847 scope.go:110] "RemoveContainer" containerID="74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c"
	May 31 18:23:10 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:10.724944    2847 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-s4rf7_kube-system(478a0044-cd97-4cf0-9805-be336cddfb83)\"" pod="kube-system/kindnet-s4rf7" podUID=478a0044-cd97-4cf0-9805-be336cddfb83
	May 31 18:23:14 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:14.079502    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:19 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:19.080881    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:21 no-preload-20220531175323-6903 kubelet[2847]: I0531 18:23:21.724625    2847 scope.go:110] "RemoveContainer" containerID="74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c"
	May 31 18:23:21 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:21.725014    2847 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-s4rf7_kube-system(478a0044-cd97-4cf0-9805-be336cddfb83)\"" pod="kube-system/kindnet-s4rf7" podUID=478a0044-cd97-4cf0-9805-be336cddfb83
	May 31 18:23:24 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:24.082027    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:29 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:29.083329    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:33 no-preload-20220531175323-6903 kubelet[2847]: I0531 18:23:33.724242    2847 scope.go:110] "RemoveContainer" containerID="74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c"
	May 31 18:23:33 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:33.724581    2847 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-s4rf7_kube-system(478a0044-cd97-4cf0-9805-be336cddfb83)\"" pod="kube-system/kindnet-s4rf7" podUID=478a0044-cd97-4cf0-9805-be336cddfb83
	May 31 18:23:34 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:34.084596    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:39 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:39.085569    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:44 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:44.086308    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:48 no-preload-20220531175323-6903 kubelet[2847]: I0531 18:23:48.724613    2847 scope.go:110] "RemoveContainer" containerID="74af9004ff4803019b1f2dedcd75f7b12be87e3ed9fc564804b9c55c1734407c"
	May 31 18:23:49 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:49.087367    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:54 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:54.088384    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:23:59 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:23:59.089123    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:24:04 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:24:04.090096    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:24:09 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:24:09.091241    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:24:14 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:24:14.092301    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:24:19 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:24:19.093823    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:24:24 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:24:24.094691    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:24:29 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:24:29.096004    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:24:34 no-preload-20220531175323-6903 kubelet[2847]: E0531 18:24:34.097646    2847 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-r6lzx metrics-server-b955d9d8-nfwnt storage-provisioner dashboard-metrics-scraper-56974995fc-cnl68 kubernetes-dashboard-8469778f77-269mb
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-r6lzx metrics-server-b955d9d8-nfwnt storage-provisioner dashboard-metrics-scraper-56974995fc-cnl68 kubernetes-dashboard-8469778f77-269mb
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-r6lzx metrics-server-b955d9d8-nfwnt storage-provisioner dashboard-metrics-scraper-56974995fc-cnl68 kubernetes-dashboard-8469778f77-269mb: exit status 1 (53.163556ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-r6lzx" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-nfwnt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-cnl68" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-269mb" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220531175323-6903 describe pod coredns-64897985d-r6lzx metrics-server-b955d9d8-nfwnt storage-provisioner dashboard-metrics-scraper-56974995fc-cnl68 kubernetes-dashboard-8469778f77-269mb: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.42s)
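The kubelet log above carries the root cause for the non-running pods in this failure: the CNI plugin never initialized and the kindnet-cni container sat in CrashLoopBackOff for the whole wait. A minimal follow-up sketch for confirming that state by hand, assuming the cluster is still reachable; these are standard kubectl/minikube invocations, not part of the test harness, with the profile, pod, and container names taken from the log above:

	# Node readiness should still report NetworkPluginNotReady while the CNI is down
	kubectl --context no-preload-20220531175323-6903 get nodes -o wide
	# Last logs from the crash-looping CNI container (pod and container names from the kubelet log)
	kubectl --context no-preload-20220531175323-6903 -n kube-system logs kindnet-s4rf7 -c kindnet-cni --previous
	# "cni plugin not initialized" usually means no config was ever written here
	out/minikube-linux-amd64 ssh -p no-preload-20220531175323-6903 -- ls /etc/cni/net.d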

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.38s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-nd4tk" [8b786045-7d91-47dd-909c-fb8e151feef0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0531 18:17:15.750026    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 18:25:15.863644    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 18:25:48.120907    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:276: ***** TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
start_stop_delete_test.go:276: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-05-31 18:26:11.899103907 +0000 UTC m=+4423.424250164
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 describe po kubernetes-dashboard-8469778f77-nd4tk -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531175509-6903 describe po kubernetes-dashboard-8469778f77-nd4tk -n kubernetes-dashboard: context deadline exceeded (1.527µs)
start_stop_delete_test.go:276: kubectl --context default-k8s-different-port-20220531175509-6903 describe po kubernetes-dashboard-8469778f77-nd4tk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 logs kubernetes-dashboard-8469778f77-nd4tk -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531175509-6903 logs kubernetes-dashboard-8469778f77-nd4tk -n kubernetes-dashboard: context deadline exceeded (168ns)
start_stop_delete_test.go:276: kubectl --context default-k8s-different-port-20220531175509-6903 logs kubernetes-dashboard-8469778f77-nd4tk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
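The dashboard pod never left Pending; the pod event recorded earlier shows it was Unschedulable because the single node carried the node.kubernetes.io/not-ready taint. A short sketch of how one might verify that taint and the pod state directly, assuming the context is still valid; plain kubectl, with names and the label selector exactly as they appear in the test output above:

	# Show whether the not-ready taint is still present on the node
	kubectl --context default-k8s-different-port-20220531175509-6903 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
	# List dashboard pods by the same selector the test waits on
	kubectl --context default-k8s-different-port-20220531175509-6903 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide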
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220531175509-6903
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220531175509-6903:

-- stdout --
	[
	    {
	        "Id": "b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a",
	        "Created": "2022-05-31T17:55:17.80847266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:08:09.029725982Z",
	            "FinishedAt": "2022-05-31T18:08:07.755765264Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hostname",
	        "HostsPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/hosts",
	        "LogPath": "/var/lib/docker/containers/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a/b24400321365b52d1450f04803831b96d9fe8bf8a043e59c8a9ce2f1eb37538a-json.log",
	        "Name": "/default-k8s-different-port-20220531175509-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220531175509-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220531175509-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/544294
1f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/d
ocker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990
471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a9c4dd03c4cc61fa57ce6326a7107df3f62e81a356af7510c11de29a2413d47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220531175509-6903",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220531175509-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220531175509-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220531175509-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "71a74b75d7c8373b45e5345e309467d303c24cf6082ea84003df90e5a5173961",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/71a74b75d7c8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220531175509-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b24400321365",
	                        "default-k8s-different-port-20220531175509-6903"
	                    ],
	                    "NetworkID": "6fc1f79f54eab1e8df36883c8283b483c18aa0e383b30bdb7aa37eb035c0586e",
	                    "EndpointID": "0cf16e08dfbc9740717242d34f2958180fb422d48a507f378010469ef6cbd428",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
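The inspect output above shows each exposed container port bound to an ephemeral port on 127.0.0.1 under NetworkSettings.Ports (the API server's 8444/tcp lands on 49434). A minimal sketch for reading those mappings straight from Docker rather than from the full dump, assuming the container is still running; both are standard Docker CLI commands, not harness output:

	# Dump the live port map shown above as JSON
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-different-port-20220531175509-6903
	# Resolve a single binding, e.g. the apiserver port -> 127.0.0.1:49434
	docker port default-k8s-different-port-20220531175509-6903 8444/tcp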
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220531175509-6903 logs -n 25

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p                                                | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                    |                                                |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                    | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | newest-cni-20220531175602-6903                    | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                    |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903    | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903    | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | default-k8s-different-port-20220531175509-6903    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | embed-certs-20220531175604-6903                   |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:09 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                   |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:15 UTC | 31 May 22 18:15 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903    | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:17 UTC | 31 May 22 18:17 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:18 UTC | 31 May 22 18:18 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:24 UTC | 31 May 22 18:24 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:24 UTC | 31 May 22 18:24 UTC |
	|         | no-preload-20220531175323-6903                    |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:09:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
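	Each entry below follows the klog convention spelled out in that header: taking the next line as an example, "I" marks severity Info, "0531" the date, "18:09:07.772021" the wall-clock time, "269289" the process id, and "out.go:296" the emitting source line. To skim a capture like this for problems, filtering on the severity letter works; a minimal sketch, assuming the log was saved to minikube.log:
	
	    grep -E '^[[:space:]]*[WE][0-9]{4}' minikube.log    # warnings and errors only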
	I0531 18:09:07.772021  269289 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:09:07.772181  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772198  269289 out.go:309] Setting ErrFile to fd 2...
	I0531 18:09:07.772206  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772308  269289 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:09:07.772576  269289 out.go:303] Setting JSON to false
	I0531 18:09:07.774031  269289 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6699,"bootTime":1654013849,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:09:07.774104  269289 start.go:125] virtualization: kvm guest
	I0531 18:09:07.776455  269289 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:09:07.777955  269289 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:09:07.777894  269289 notify.go:193] Checking for updates...
	I0531 18:09:07.779456  269289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:09:07.780983  269289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:07.782421  269289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:09:07.783767  269289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:09:07.785508  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:07.786063  269289 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:09:07.823614  269289 docker.go:137] docker version: linux-20.10.16
	I0531 18:09:07.823700  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:07.921312  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.851605066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:07.921419  269289 docker.go:254] overlay module found
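	The docker system info --format "{{json .}}" probe above is how minikube fingerprints the host daemon (storage driver, cgroup driver, available runtimes). The same fields can be pulled out by hand; a minimal sketch, assuming jq is installed:
	
	    docker system info --format '{{json .}}' | jq '{driver: .Driver, cgroup: .CgroupDriver, runtimes: (.Runtimes | keys)}'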
	I0531 18:09:07.923729  269289 out.go:177] * Using the docker driver based on existing profile
	I0531 18:09:07.925091  269289 start.go:284] selected driver: docker
	I0531 18:09:07.925101  269289 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTim
eout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:07.925198  269289 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:09:07.926037  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:08.026136  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.954605342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:08.026404  269289 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:09:08.026427  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:08.026434  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:08.026459  269289 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mu
ltiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:08.029735  269289 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 18:09:08.031102  269289 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:09:08.032588  269289 out.go:177] * Pulling base image ...
	I0531 18:09:08.033842  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:08.033877  269289 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:09:08.033887  269289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:09:08.033901  269289 cache.go:57] Caching tarball of preloaded images
	I0531 18:09:08.034419  269289 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:09:08.034462  269289 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:09:08.034678  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.080805  269289 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:09:08.080832  269289 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:09:08.080845  269289 cache.go:206] Successfully downloaded all kic artifacts
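	The "preload" verified here is a tarball of pre-pulled images that minikube unpacks into containerd instead of pulling each image over the network; the existence check amounts to a stat of the cached file, e.g. (with the MINIKUBE_HOME set at the top of this run):
	
	    ls -lh $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4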
	I0531 18:09:08.080882  269289 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:09:08.080970  269289 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 68.005µs
	I0531 18:09:08.080987  269289 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:09:08.080995  269289 fix.go:55] fixHost starting: 
	I0531 18:09:08.081277  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.111663  269289 fix.go:103] recreateIfNeeded on embed-certs-20220531175604-6903: state=Stopped err=<nil>
	W0531 18:09:08.111695  269289 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:09:08.114165  269289 out.go:177] * Restarting existing docker container for "embed-certs-20220531175604-6903" ...
	I0531 18:09:03.375777  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:05.375862  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.875726  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.204868  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:09.704483  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:11.704965  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
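	Both pollers above are stuck on the same condition: the node still carries the node.kubernetes.io/not-ready taint, which the kubelet clears only once a CNI plugin has configured pod networking, so the coredns pods can never schedule. With kubectl pointed at the affected profile, the state can be confirmed with:
	
	    kubectl get nodes -o jsonpath='{.items[*].spec.taints}'
	    kubectl -n kube-system get pods -o wide    # coredns stays Pending until the taint clears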
	I0531 18:09:08.115697  269289 cli_runner.go:164] Run: docker start embed-certs-20220531175604-6903
	I0531 18:09:08.473488  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.506356  269289 kic.go:416] container "embed-certs-20220531175604-6903" state is running.
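	The restart path here is ordinary Docker lifecycle handling of the stopped node container; done by hand it would be roughly:
	
	    docker start embed-certs-20220531175604-6903
	    docker container inspect embed-certs-20220531175604-6903 --format '{{.State.Status}}'    # expect "running"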
	I0531 18:09:08.506679  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:08.539365  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.539556  269289 machine.go:88] provisioning docker machine ...
	I0531 18:09:08.539577  269289 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 18:09:08.539613  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:08.571324  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:08.571535  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:08.571555  269289 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 18:09:08.572224  269289 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40978->127.0.0.1:49442: read: connection reset by peer
	I0531 18:09:11.691333  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 18:09:11.691423  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:11.724149  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:11.724284  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:11.724303  269289 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:09:11.834529  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: 
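	The SSH script above pins the hostname to 127.0.1.1 in the Debian/Ubuntu style: leave /etc/hosts alone if the name is already present, rewrite an existing 127.0.1.1 line in place, otherwise append one. The result can be spot-checked from inside the node:
	
	    grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 embed-certs-20220531175604-6903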
	I0531 18:09:11.834559  269289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:09:11.834578  269289 ubuntu.go:177] setting up certificates
	I0531 18:09:11.834589  269289 provision.go:83] configureAuth start
	I0531 18:09:11.834632  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:11.865871  269289 provision.go:138] copyHostCerts
	I0531 18:09:11.865924  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:09:11.865932  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:09:11.865982  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:09:11.866066  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:09:11.866081  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:09:11.866104  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:09:11.866152  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:09:11.866160  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:09:11.866179  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:09:11.866227  269289 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
	I0531 18:09:12.090438  269289 provision.go:172] copyRemoteCerts
	I0531 18:09:12.090495  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:09:12.090534  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.123286  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.206789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:09:12.223789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:09:12.239992  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:09:12.256268  269289 provision.go:86] duration metric: configureAuth took 421.669237ms
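	configureAuth regenerated a server certificate whose SAN list (logged above) covers the container IP, localhost, and the machine name. If TLS to the machine ever fails at this stage, the SANs actually baked into the cert can be read back; a sketch, assuming openssl and the MINIKUBE_HOME from this run:
	
	    openssl x509 -in $MINIKUBE_HOME/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'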
	I0531 18:09:12.256294  269289 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:09:12.256475  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:12.256491  269289 machine.go:91] provisioned docker machine in 3.716920297s
	I0531 18:09:12.256499  269289 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 18:09:12.256507  269289 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:09:12.256545  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:09:12.256574  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.289898  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.374015  269289 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:09:12.376709  269289 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:09:12.376729  269289 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:09:12.376738  269289 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:09:12.376744  269289 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:09:12.376754  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:09:12.376804  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:09:12.376870  269289 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:09:12.376948  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:09:12.383195  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:12.399428  269289 start.go:309] post-start completed in 142.913406ms
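	The filesync scan mirrors anything under .minikube/files verbatim into the node, which is how the host's extra cert bundle 69032.pem lands in /etc/ssl/certs. Assuming the node is up, the copy can be verified through the profile's SSH wrapper:
	
	    out/minikube-linux-amd64 -p embed-certs-20220531175604-6903 ssh -- ls -l /etc/ssl/certs/69032.pem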
	I0531 18:09:12.399507  269289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:09:12.399549  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.432219  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.515046  269289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:09:12.518751  269289 fix.go:57] fixHost completed within 4.437752156s
	I0531 18:09:12.518774  269289 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 4.437792479s
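	The two df probes are minikube's disk-pressure check: awk 'NR==2{print $5}' grabs the Use% column of the second output line, and the -BG variant reports free space in whole GiB ($4 is the Available column). Run standalone:
	
	    df -h /var | awk 'NR==2{print $5}'     # e.g. 23%
	    df -BG /var | awk 'NR==2{print $4}'    # e.g. 18G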
	I0531 18:09:12.518855  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:12.550553  269289 ssh_runner.go:195] Run: systemctl --version
	I0531 18:09:12.550601  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.550641  269289 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:09:12.550697  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.582421  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.582872  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.659019  269289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:09:12.679913  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:09:12.688643  269289 docker.go:187] disabling docker service ...
	I0531 18:09:12.688702  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:09:12.697529  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:09:12.706100  269289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:09:09.875798  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.375479  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:13.705008  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.205471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.785582  269289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:09:12.855815  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
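	Since this cluster runs containerd, the Docker engine shipped inside the node image is stopped, disabled, and masked so socket activation cannot bring it back; masking is the strongest of the three. Whether it took effect is easy to confirm:
	
	    sudo systemctl is-enabled docker.service    # prints "masked" once the unit is masked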
	I0531 18:09:12.864020  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:09:12.875934  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
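	The opaque blob is the node's /etc/containerd/config.toml, base64-encoded only so it survives shell quoting. Piping it through base64 -d yields a standard CRI config; the salient lines, excerpted from the decoded payload, are:
	
	    version = 2
	    root = "/var/lib/containerd"
	    [plugins."io.containerd.grpc.v1.cri"]
	      sandbox_image = "k8s.gcr.io/pause:3.6"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      bin_dir = "/opt/cni/bin"
	      conf_dir = "/etc/cni/net.mk"
	
	Note that conf_dir matches the kubelet cni-conf-dir extra option carried in this profile's config.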
	I0531 18:09:12.888218  269289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:09:12.894177  269289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:09:12.900048  269289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:09:12.967246  269289 ssh_runner.go:195] Run: sudo systemctl restart containerd
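	The three commands before this restart prepare the kernel and systemd for the new runtime config: the sysctl call checks net.bridge.bridge-nf-call-iptables, the echo forces net.ipv4.ip_forward on (kube-proxy and most CNIs need both set to 1), and daemon-reload makes systemd re-read units before containerd is bounced onto the freshly written config.toml. A quick check:
	
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward    # both should report "= 1"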
	I0531 18:09:13.034030  269289 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:09:13.034097  269289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:09:13.037697  269289 start.go:468] Will wait 60s for crictl version
	I0531 18:09:13.037755  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:13.061656  269289 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:09:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:09:14.375758  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.375804  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.205548  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.207254  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.375866  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.875902  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.108851  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:24.132832  269289 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:09:24.132887  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.159676  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.189925  269289 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:09:24.191315  269289 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
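	The Go template in that docker network inspect flattens the network's IPAM block into one line of pseudo-JSON. A trimmed version of the same query against the profile's network:
	
	    docker network inspect embed-certs-20220531175604-6903 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'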
	I0531 18:09:24.222603  269289 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:09:24.225783  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
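	The one-liner refreshes the host.minikube.internal alias atomically: it copies every /etc/hosts line except a stale entry into a temp file, appends the 192.168.49.1 mapping, then cp's the temp file back into place under sudo. Verification from inside the node:
	
	    grep 'host.minikube.internal' /etc/hosts    # expect: 192.168.49.1  host.minikube.internal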
	I0531 18:09:24.236610  269289 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:09:22.705143  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.205093  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.237875  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:24.237934  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.260983  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.261003  269289 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:09:24.261042  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.282964  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.282986  269289 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:09:24.283024  269289 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:09:24.304408  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:24.304433  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:24.304447  269289 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
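cni.go:162 records minikube's CNI auto-selection: with the docker driver and a non-docker runtime (containerd here), no explicit --cni choice resolves to kindnet. A toy version of the rule as observed in this log (the fallback branch is a placeholder assumption; minikube's real chooser in its cni package handles many more cases):

    // Sketch: the CNI auto-selection seen in this log, reduced to one rule.
    package main

    import "fmt"

    func chooseCNI(driver, runtime, requested string) string {
        if requested != "" {
            return requested // an explicit --cni flag wins
        }
        if driver == "docker" && runtime != "docker" {
            return "kindnet" // docker driver + containerd/cri-o -> kindnet
        }
        return "bridge" // placeholder default for this sketch only
    }

    func main() {
        fmt.Println(chooseCNI("docker", "containerd", "")) // kindnet
    }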
	I0531 18:09:24.304466  269289 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:09:24.304647  269289 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:09:24.304770  269289 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:09:24.304828  269289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:09:24.311429  269289 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:09:24.311492  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:09:24.317597  269289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 18:09:24.329144  269289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:09:24.340465  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
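The three scp memory lines above are minikube copying in-memory byte buffers (the rendered kubelet drop-in, unit file, and kubeadm.yaml) to paths on the node. As a local illustration only, minus the SSH hop that ssh_runner performs (paths and contents here are placeholders):

    // Sketch: what "scp memory --> <path>" amounts to, locally.
    package main

    import (
        "os"
        "path/filepath"
    )

    func writeMemory(path string, data []byte) error {
        // Ensure the parent directory exists, then write the buffer.
        if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
            return err
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        unit := []byte("[Unit]\nWants=containerd.service\n") // placeholder content
        if err := writeMemory("/tmp/kubelet.service.d/10-kubeadm.conf", unit); err != nil {
            panic(err)
        }
    }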
	I0531 18:09:24.352005  269289 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:09:24.354594  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.362803  269289 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 18:09:24.362883  269289 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:09:24.362917  269289 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:09:24.362978  269289 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 18:09:24.363028  269289 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 18:09:24.363065  269289 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 18:09:24.363186  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:09:24.363227  269289 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:09:24.363235  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:09:24.363261  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:09:24.363280  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:09:24.363304  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:09:24.363343  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:24.363895  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:09:24.379919  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:09:24.395597  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:09:24.411294  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:09:24.426758  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:09:24.442477  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:09:24.458117  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:09:24.473675  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:09:24.489258  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:09:24.504896  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:09:24.520499  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:09:24.536081  269289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 18:09:24.547693  269289 ssh_runner.go:195] Run: openssl version
	I0531 18:09:24.552235  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:09:24.559008  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561937  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561976  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.566453  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:09:24.572576  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:09:24.579358  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582018  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582059  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.586386  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:09:24.592600  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:09:24.599203  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.601990  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.602019  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.606281  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
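The openssl x509 -hash -noout runs above compute the subject-name hash that OpenSSL uses to look up CA certificates in /etc/ssl/certs, and the ln -fs commands create the matching <hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A standalone sketch of the same operation; the certificate path is a placeholder, and writing to /etc/ssl/certs requires root:

    // Sketch: compute a CA cert's OpenSSL subject hash and symlink it
    // into /etc/ssl/certs/<hash>.0, as the ln -fs commands above do.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // -f semantics: replace an existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }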
	I0531 18:09:24.612350  269289 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:24.612448  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:09:24.612477  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:24.635020  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:24.635044  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:24.635056  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:24.635066  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:24.635074  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:24.635089  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:24.635098  269289 cri.go:87] found id: ""
	I0531 18:09:24.635135  269289 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:09:24.646413  269289 cri.go:114] JSON = null
	W0531 18:09:24.646451  269289 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
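The unpause failed warning is minikube cross-checking two views of the node: crictl ps found six kube-system containers, while runc ... list -f json returned null, i.e. nothing paused to resume. A sketch of that reconciliation, shelling out the same commands the log shows (this must run on a containerd host with sudo; it is an illustration, not minikube's cri package):

    // Sketch: compare crictl's container list against runc's, mirroring
    // the "list returned 0 containers, but ps returned 6" check above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(psOut))

        runcOut, err := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        if err != nil {
            panic(err)
        }
        var listed []map[string]any // runc prints "null" when nothing is listed
        _ = json.Unmarshal(runcOut, &listed)

        if len(listed) == 0 && len(ids) > 0 {
            fmt.Printf("runc listed 0 containers, but crictl ps returned %d\n", len(ids))
        }
    }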
	I0531 18:09:24.646485  269289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:09:24.652823  269289 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:09:24.652845  269289 kubeadm.go:626] restartCluster start
	I0531 18:09:24.652871  269289 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:09:24.658784  269289 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.659393  269289 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531175604-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:24.659677  269289 kubeconfig.go:127] "embed-certs-20220531175604-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:09:24.660165  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:09:24.661391  269289 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:09:24.667472  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.667510  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.674690  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.875005  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.875078  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.883754  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.075164  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.075227  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.083571  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.275792  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.275862  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.284084  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.475412  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.475492  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.483769  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.675018  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.675093  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.683421  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.875700  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.875758  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.884127  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.075367  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.075441  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.083636  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.274875  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.274936  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.283259  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.475530  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.475600  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.483831  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.675186  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.675264  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.683305  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.875527  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.875591  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.883712  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.074838  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.074911  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.082943  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.275201  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.275279  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.283352  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.475732  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.475810  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.484027  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.675351  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.675423  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.683562  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.683579  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.683610  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.690956  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.690974  269289 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 18:09:27.690980  269289 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:09:27.690988  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:09:27.691025  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:27.714735  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:27.714760  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:27.714770  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:27.714779  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:27.714788  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:27.714799  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:27.714808  269289 cri.go:87] found id: ""
	I0531 18:09:27.714813  269289 cri.go:232] Stopping containers: [1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad]
	I0531 18:09:27.714851  269289 ssh_runner.go:195] Run: which crictl
	I0531 18:09:27.717532  269289 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad
	I0531 18:09:27.740922  269289 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 18:09:27.750082  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:09:27.756789  269289 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:56 /etc/kubernetes/scheduler.conf
	
	I0531 18:09:27.756825  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:09:27.763097  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:09:27.769330  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:09:23.375648  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.375756  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.875533  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.704608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:29.705088  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.775774  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.775824  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:09:27.782146  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:09:27.788206  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.788249  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
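The grep probes against /etc/kubernetes/*.conf above check whether each component kubeconfig still points at https://control-plane.minikube.internal:8443; files where the grep exits non-zero are removed and regenerated by the kubeadm init phase kubeconfig run that follows. The same check, reduced to its essentials (an illustrative reduction of kubeadm.go:166, needing root to touch /etc/kubernetes):

    // Sketch: drop component kubeconfigs that no longer point at the
    // expected control-plane endpoint, as the grep/rm sequence above does.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // a missing file will be regenerated anyway
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                _ = os.Remove(f)
            }
        }
    }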
	I0531 18:09:27.794183  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800354  269289 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800371  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:27.840426  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.483103  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.611279  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.659562  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.717430  269289 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:09:28.717495  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.225924  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.725826  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.225776  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.725310  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.225503  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.725612  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.225964  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.726000  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.375823  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.875541  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.205237  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:34.205608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:36.704946  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:33.225375  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:33.726285  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.225275  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.725662  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.736940  269289 api_server.go:71] duration metric: took 6.019510906s to wait for apiserver process to appear ...
	I0531 18:09:34.736971  269289 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:09:34.736983  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:34.737332  269289 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0531 18:09:35.238095  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:35.375635  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:37.378182  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:38.110402  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:09:38.110472  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:09:38.237764  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.306327  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.306358  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:38.737926  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.744239  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.744265  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.237517  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.242116  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:39.242143  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.738349  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.743589  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:09:39.749147  269289 api_server.go:140] control plane version: v1.23.6
	I0531 18:09:39.749166  269289 api_server.go:130] duration metric: took 5.012189517s to wait for apiserver health ...
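Each Checking apiserver healthz line in the run above is an HTTPS GET against https://192.168.49.2:8443/healthz, retried until the endpoint stops answering with connection refused, 403 (anonymous user, RBAC bootstrap not done), or 500 (poststarthook checks still failing) and finally returns 200 ok. A minimal Go poller in the same spirit; skipping TLS verification is an assumption for brevity in this sketch only, whereas minikube verifies against the cluster CA:

    // Sketch: poll an apiserver /healthz endpoint until it returns 200 ok,
    // as the api_server.go:240/266 lines above do.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Assumption for this sketch only: skip certificate verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.49.2:8443/healthz"
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("stopped:", err) // e.g. connection refused while booting
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }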
	I0531 18:09:39.749173  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:39.749179  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:39.751213  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:09:38.705223  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:41.205391  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.752701  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:09:39.818871  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:09:39.818891  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:09:39.834366  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:09:40.442544  269289 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:09:40.449925  269289 system_pods.go:59] 9 kube-system pods found
	I0531 18:09:40.449965  269289 system_pods.go:61] "coredns-64897985d-w2s2k" [272d1735-077b-4af0-a2fa-5b0f85c8e4fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.449977  269289 system_pods.go:61] "etcd-embed-certs-20220531175604-6903" [73942f36-bfae-49e5-87fb-43820d0182cc] Running
	I0531 18:09:40.449988  269289 system_pods.go:61] "kindnet-jrlsl" [c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:09:40.449999  269289 system_pods.go:61] "kube-apiserver-embed-certs-20220531175604-6903" [ecfb98e0-1317-4251-b80f-93138d93521a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:09:40.450014  269289 system_pods.go:61] "kube-controller-manager-embed-certs-20220531175604-6903" [f655b743-bb70-49d9-ab57-6575a05ae6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:09:40.450024  269289 system_pods.go:61] "kube-proxy-nvktf" [a3c917bd-93a0-40b6-85c5-7ea637a1aaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:09:40.450039  269289 system_pods.go:61] "kube-scheduler-embed-certs-20220531175604-6903" [7ff61926-9d06-4412-b29e-a3b958ae7fdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:09:40.450056  269289 system_pods.go:61] "metrics-server-b955d9d8-5hcnq" [bc956dda-3db9-478e-8fe9-4f4375857a12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450067  269289 system_pods.go:61] "storage-provisioner" [d2e35bf4-3bfa-46e5-91c7-70a4f1ab348a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450075  269289 system_pods.go:74] duration metric: took 7.509225ms to wait for pod list to return data ...
	I0531 18:09:40.450087  269289 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:09:40.452551  269289 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:09:40.452581  269289 node_conditions.go:123] node cpu capacity is 8
	I0531 18:09:40.452591  269289 node_conditions.go:105] duration metric: took 2.4994ms to run NodePressure ...
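The node_conditions lines read the node's status to verify there is no memory/disk/PID pressure and to record capacity (304695084Ki ephemeral storage and 8 CPUs here). With client-go that boils down to the following sketch; the kubeconfig path is again a placeholder:

    // Sketch: check node pressure conditions and capacity, as the
    // node_conditions.go lines above report.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                // Any condition other than Ready being True signals pressure.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("node %s has pressure condition %s\n", n.Name, c.Type)
                }
            }
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }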
	I0531 18:09:40.452605  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:40.569376  269289 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573483  269289 kubeadm.go:777] kubelet initialised
	I0531 18:09:40.573510  269289 kubeadm.go:778] duration metric: took 4.104573ms waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573518  269289 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:09:40.580067  269289 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
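
Note: each of the repeated pod_ready.go:102 lines that follow is one poll iteration. The pod is still Pending and unscheduled, so it has no Ready condition and the wait keeps retrying until its 4m0s budget expires. A minimal sketch of such a loop (an analogue, not minikube's actual pod_ready.go; the 2s interval is an assumed value, and corev1/wait are k8s.io/api and k8s.io/apimachinery imports added to the first snippet):

	// waitPodReady polls until the named pod reports condition Ready=True,
	// mirroring the 4m0s budget logged above.
	// Extra imports: "time", corev1 "k8s.io/api/core/v1",
	// "k8s.io/apimachinery/pkg/util/wait".
	func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
		return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			// A Pending, unscheduled pod (as in the lines below) has no Ready
			// condition yet, so keep polling.
			return false, nil
		})
	}
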
	I0531 18:09:42.585321  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.876171  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:42.376043  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:43.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:45.705136  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.586120  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:47.085707  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.875548  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:46.876233  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:48.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:50.206057  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.585053  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.585603  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.375708  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.875940  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:52.705353  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.705507  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.084883  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.085413  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:53.876111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.375796  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:57.205002  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:59.704756  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.085579  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.584952  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.585345  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.875791  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.876220  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.205444  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.205856  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:06.704493  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:07.085004  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:03.375587  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:05.876410  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.705331  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:11.205277  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:09.585758  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.085534  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.375300  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:10.375906  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.875744  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:13.205338  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:15.704598  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.085723  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:16.085916  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.876170  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.376062  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.705091  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.205306  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:18.584998  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.585431  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:19.875325  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:21.875824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:22.704807  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.205734  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.085208  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.585032  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.585926  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.876246  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:26.375824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.704471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.205614  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.085645  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:28.375853  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.875828  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.876426  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.205748  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.705114  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.586123  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.084913  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:35.376094  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.376934  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.204855  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.205282  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.704795  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.585507  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:42.084955  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.875448  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.875776  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:43.704835  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:45.705043  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.085118  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.085542  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.375440  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.876272  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.205603  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:50.704404  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.586090  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.085610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:49.376052  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.876540  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:52.705054  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:55.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:53.585839  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.084986  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:54.376090  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.875598  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:57.704861  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.205848  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.085315  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.085770  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.585759  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.876074  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.876394  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.704985  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.702287  261225 pod_ready.go:81] duration metric: took 4m0.002387032s waiting for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	E0531 18:11:03.702314  261225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:11:03.702336  261225 pod_ready.go:38] duration metric: took 4m0.006822044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:11:03.702359  261225 kubeadm.go:630] restartCluster took 4m15.094217425s
	W0531 18:11:03.702487  261225 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
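	[editor's note] The repeated pod_ready.go:102 entries above come from minikube polling each system-critical pod's Ready condition every couple of seconds until the 4m0s timeout recorded at pod_ready.go:81/66; the coredns pods never leave Pending because the single node still carries the node.kubernetes.io/not-ready taint. A minimal illustrative sketch of such a poll loop using client-go (assumed API usage for illustration only, not minikube's actual pod_ready.go):

	// Sketch: poll a pod until its Ready condition is True or a timeout
	// expires, mirroring the 4m0s wait logged above. Names and paths here
	// are illustrative assumptions.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
				// Same shape as the log lines above: phase plus conditions.
				fmt.Printf("pod %q doesn't have \"Ready\" status: %+v\n", name, pod.Status)
			}
			time.Sleep(2 * time.Second) // the log shows roughly 2-2.5s poll intervals
		}
		return fmt.Errorf("timed out waiting %v for pod %q to be \"Ready\"", timeout, name)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-64897985d-8cptk", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}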
	I0531 18:11:03.702514  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:11:05.343353  261225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.640813037s)
	I0531 18:11:05.343437  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:05.352859  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:11:05.359859  261225 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:11:05.359907  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:11:05.366334  261225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:11:05.366377  261225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
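	[editor's note] After the reset, the stale-config check fails because admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf are gone, so minikube falls back to a fresh `kubeadm init` with the preflight errors listed above ignored. A hedged sketch of that reset-then-init fallback (assumed flow, not minikube's ssh_runner; the ignore list is abridged from the log):

	// Sketch: reset cluster state, then re-run `kubeadm init` against the
	// generated config, skipping the preflight checks the log names.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("%s %v:\n%s\n", name, args, out)
		return err
	}

	func main() {
		// Reset errors are tolerated; a fresh init follows regardless.
		_ = run("sudo", "kubeadm", "reset",
			"--cri-socket", "/run/containerd/containerd.sock", "--force")
		if err := run("sudo", "kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Swap,Mem,SystemVerification"); err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}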
	I0531 18:11:04.587602  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.085341  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.375499  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:05.375669  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.375797  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:12.086457  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.376111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:11.875588  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:14.586248  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:17.085311  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:13.875622  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:16.375856  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.709173  261225 out.go:204]   - Generating certificates and keys ...
	I0531 18:11:18.711907  261225 out.go:204]   - Booting up control plane ...
	I0531 18:11:18.714606  261225 out.go:204]   - Configuring RBAC rules ...
	I0531 18:11:18.716465  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:11:18.716486  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:11:18.717949  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:11:18.719198  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:11:18.722600  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:11:18.722616  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:11:18.735057  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:11:19.350353  261225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:11:19.350404  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.350427  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531175323-6903 minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.430085  261225 ops.go:34] apiserver oom_adj: -16
	I0531 18:11:19.430086  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.983488  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.483888  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:21.483016  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.087921  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.585530  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.876428  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.375897  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.983544  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.482945  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.483551  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.983592  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.483659  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.483167  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.982981  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:26.483682  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.585627  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.585786  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.585848  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:23.375949  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.375976  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.875813  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:26.983223  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.483242  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.983137  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.483188  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.483879  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.983081  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.483570  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.483729  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.537008  261225 kubeadm.go:1045] duration metric: took 12.186656817s to wait for elevateKubeSystemPrivileges.
	I0531 18:11:31.537033  261225 kubeadm.go:397] StartCluster complete in 4m42.970207425s
	I0531 18:11:31.537049  261225 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:31.537139  261225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:11:31.538101  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:32.051475  261225 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531175323-6903" rescaled to 1
	I0531 18:11:32.051538  261225 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:11:32.053328  261225 out.go:177] * Verifying Kubernetes components...
	I0531 18:11:32.051603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:11:32.051631  261225 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:11:32.051839  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:11:32.054610  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:32.054630  261225 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054636  261225 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054661  261225 addons.go:65] Setting dashboard=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054667  261225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531175323-6903"
	I0531 18:11:32.054666  261225 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054677  261225 addons.go:153] Setting addon dashboard=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054685  261225 addons.go:165] addon dashboard should already be in state true
	I0531 18:11:32.054694  261225 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:32.054650  261225 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054708  261225 addons.go:165] addon metrics-server should already be in state true
	W0531 18:11:32.054715  261225 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:11:32.054731  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054749  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054752  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.055002  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055252  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055266  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055256  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.065260  261225 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:11:32.103052  261225 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:11:32.107208  261225 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108480  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:11:32.108501  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:11:32.109942  261225 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108562  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.111742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:11:32.111790  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:11:32.111836  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.114237  261225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:11:30.085797  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.086472  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:30.376185  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.875986  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.115989  261225 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.116015  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:11:32.116006  261225 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.116032  261225 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:11:32.116057  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.116079  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.116746  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.154692  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
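	[editor's note] The sed pipeline above rewrites the CoreDNS Corefile in the coredns ConfigMap, inserting a hosts{} block before the `forward . /etc/resolv.conf` directive so host.minikube.internal resolves to the host gateway (192.168.67.1); the "host record injected into CoreDNS" line later in this log confirms it took effect. A minimal sketch of the equivalent string transformation (illustrative only; the real edit is the sed pipeline shown above):

	// Sketch: prepend a hosts{} block to the forward directive, mirroring
	// sed's `/^        forward . \/etc\/resolv.conf.*/i` insertion.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		return strings.Replace(corefile,
			"        forward . /etc/resolv.conf",
			block+"        forward . /etc/resolv.conf", 1)
	}

	func main() {
		// Abbreviated Corefile for illustration.
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Println(injectHostRecord(corefile, "192.168.67.1"))
	}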
	I0531 18:11:32.172159  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176336  261225 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.176359  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:11:32.176359  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176412  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.178381  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.214197  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.405898  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:11:32.405927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:11:32.418989  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.420809  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.421818  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:11:32.421841  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:11:32.427921  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:11:32.427945  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:11:32.502461  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:11:32.502491  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:11:32.513965  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:11:32.513988  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:11:32.519888  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.519911  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:11:32.534254  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:11:32.534276  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:11:32.613065  261225 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0531 18:11:32.613879  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.624901  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:11:32.624927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:11:32.718626  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:11:32.718699  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:11:32.811670  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:11:32.811701  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:11:32.908742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:11:32.908771  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:11:33.007964  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.007996  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:11:33.107854  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.513893  261225 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:34.072421  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.312128  261225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.204221815s)
	I0531 18:11:34.314039  261225 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:11:34.315350  261225 addons.go:417] enableAddons completed in 2.263731051s
	I0531 18:11:36.571090  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.585378  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.085134  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:35.375998  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.876022  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:38.571611  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:40.571791  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:39.585652  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.085745  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:40.375543  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.375912  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.571860  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:45.071529  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:44.585489  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.084675  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:44.875655  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:46.876121  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.072141  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.570942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:51.571718  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.085124  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.585600  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:48.876234  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.375464  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:54.071361  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:56.072083  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:53.585630  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.085163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:53.876096  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.375046  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.571942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:01.071618  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:58.585559  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:01.085890  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.376093  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:00.876187  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:02.876231  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:03.072143  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:05.570848  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:03.086038  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.585743  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.375789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.875852  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.570951  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:09.571526  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:11.571905  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
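	(The interleaved node_ready.go:58 entries from process 261225 apply the same pattern one level up: the no-preload node never reports Ready, so the poll repeats until its own timeout. The node-side check, sketched under the same assumptions and imports as the pod sketch above:

	// Node-level counterpart of waitPodReady; same illustrative caveats.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	    return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
	        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	        if err != nil {
	            return false, nil
	        }
	        for _, c := range node.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    })
	})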
	I0531 18:12:08.085268  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.585201  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.375245  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:12.376080  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.071606  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:16.072126  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:13.085556  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:15.085642  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.585172  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.875789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.375091  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:18.571244  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:20.572070  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:19.585882  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:22.085420  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:19.375353  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:21.376109  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.071952  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:25.571538  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:24.586045  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.586231  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.876833  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.375751  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:27.571821  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.571882  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.085641  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:31.085685  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:28.875672  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:30.875820  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.876354  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.071187  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:34.071420  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:36.071564  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:33.585013  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.585739  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.375221  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:37.376282  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:38.570968  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:40.571348  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:38.085427  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.585163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:42.585706  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:39.875351  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.873471  265084 pod_ready.go:81] duration metric: took 4m0.002908121s waiting for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	E0531 18:12:40.873493  265084 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:12:40.873522  265084 pod_ready.go:38] duration metric: took 4m0.007756787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:12:40.873544  265084 kubeadm.go:630] restartCluster took 4m15.709568906s
	W0531 18:12:40.873671  265084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:12:40.873698  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:12:42.413886  265084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.540162536s)
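	(Once the 4m0s wait expires, minikube abandons the restart path and resets the cluster, as the warning at 18:12:40 shows. A local os/exec sketch of the reset invocation logged above; minikube actually runs it through its SSH runner inside the node container, and the pinned binary path and CRI socket are taken verbatim from the log:

	package main

	import (
	    "fmt"
	    "os"
	    "os/exec"
	)

	func main() {
	    // Same command as the ssh_runner.go:195 line above; PATH is
	    // prefixed with the pinned kubeadm binaries for v1.23.6.
	    env := "PATH=/var/lib/minikube/binaries/v1.23.6:" + os.Getenv("PATH")
	    cmd := exec.Command("sudo", "env", env, "kubeadm", "reset",
	        "--cri-socket", "/run/containerd/containerd.sock", "--force")
	    out, err := cmd.CombinedOutput()
	    fmt.Printf("%s", out)
	    if err != nil {
	        fmt.Fprintln(os.Stderr, "kubeadm reset failed:", err)
	    }
	})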
	I0531 18:12:42.413945  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:12:42.423107  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:12:42.429872  265084 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:12:42.429912  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:12:42.436297  265084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:12:42.436331  265084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:12:42.571552  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.072252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.084735  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.085284  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.570931  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.571608  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:51.571648  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.584898  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:51.586112  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.656930  265084 out.go:204]   - Generating certificates and keys ...
	I0531 18:12:55.659590  265084 out.go:204]   - Booting up control plane ...
	I0531 18:12:55.662052  265084 out.go:204]   - Configuring RBAC rules ...
	I0531 18:12:55.664008  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:12:55.664023  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:12:55.665448  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
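	(The cni.go:162 entry records the CNI selection: with the docker driver, a containerd runtime, and no explicit --cni flag on this profile, minikube recommends kindnet. A deliberately simplified sketch of that decision; the real logic in minikube's cni package covers more drivers and runtimes:

	package cni

	// chooseCNI is an illustrative reduction of the choice logged at
	// cni.go:162, not minikube's actual code.
	func chooseCNI(requested, driver, runtime string) string {
	    if requested != "" && requested != "auto" {
	        return requested // e.g. --cni=calico in the failing test above
	    }
	    if driver == "docker" && runtime != "docker" {
	        return "kindnet" // docker driver + containerd, as here
	    }
	    return "bridge"
	})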
	I0531 18:12:54.071703  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:56.071909  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:54.085829  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:56.584911  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.666615  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:12:55.670087  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:12:55.670101  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:12:55.683282  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:12:56.287125  265084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:12:56.287250  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.287269  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903 minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.294761  265084 ops.go:34] apiserver oom_adj: -16
	I0531 18:12:56.356555  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.931369  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.430985  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.930763  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.072370  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:00.571645  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:58.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:00.585876  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:58.431243  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.930845  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.431397  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.931568  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.431233  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.930831  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.430783  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.931582  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.431559  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.931164  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.072253  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:05.571252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:03.085192  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:05.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:03.431622  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.931432  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.431651  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.931602  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.431254  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.931669  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.431587  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.930870  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.431379  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.931781  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.431738  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.931001  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.988549  265084 kubeadm.go:1045] duration metric: took 12.701350416s to wait for elevateKubeSystemPrivileges.
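	(The half-second cadence of `sudo ... kubectl get sa default` runs between 18:12:56 and 18:13:08 is minikube waiting for the post-init controllers to create the default ServiceAccount before it elevates kube-system privileges. Sketched as a plain polling loop; the helper name and timeout handling are hypothetical, not minikube's code:

	package elevate

	import (
	    "os/exec"
	    "time"
	)

	// waitDefaultSA polls until `kubectl get sa default` succeeds,
	// mirroring the elevateKubeSystemPrivileges wait in the log above.
	func waitDefaultSA(kubectl, kubeconfig string, timeout time.Duration) bool {
	    deadline := time.Now().Add(timeout)
	    for time.Now().Before(deadline) {
	        err := exec.Command("sudo", kubectl, "get", "sa", "default",
	            "--kubeconfig="+kubeconfig).Run()
	        if err == nil {
	            return true
	        }
	        time.Sleep(500 * time.Millisecond)
	    }
	    return false
	})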
	I0531 18:13:08.988585  265084 kubeadm.go:397] StartCluster complete in 4m43.864893986s
	I0531 18:13:08.988604  265084 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:08.988717  265084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:13:08.989847  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:09.508027  265084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531175509-6903" rescaled to 1
	I0531 18:13:09.508095  265084 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:13:09.510016  265084 out.go:177] * Verifying Kubernetes components...
	I0531 18:13:09.508139  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:13:09.508164  265084 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:13:09.508352  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:13:09.511398  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:09.511420  265084 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511435  265084 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511445  265084 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511451  265084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511458  265084 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:13:09.511464  265084 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511494  265084 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511509  265084 addons.go:165] addon metrics-server should already be in state true
	I0531 18:13:09.511510  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511560  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511421  265084 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511623  265084 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511642  265084 addons.go:165] addon dashboard should already be in state true
	I0531 18:13:09.511686  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511794  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512032  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512055  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512135  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.524059  265084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:13:09.564034  265084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:13:09.565498  265084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.565568  265084 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.566183  265084 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.566993  265084 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:13:09.566996  265084 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:13:09.568335  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:13:09.568357  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:13:09.567020  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:13:09.568408  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.568430  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.567022  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.569999  265084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.568977  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.571342  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:13:09.571364  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:13:09.571420  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.602943  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:13:09.610124  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.617756  265084 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.617783  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:13:09.617847  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.619119  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.626594  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.663799  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.809728  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:13:09.809753  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:13:09.810223  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:13:09.810246  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:13:09.816960  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.817408  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.824732  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:13:09.824754  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:13:09.825129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:13:09.825148  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:13:09.838196  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:13:09.838214  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:13:09.906924  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.906947  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:13:09.918653  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:13:09.918674  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:13:09.923798  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.931866  265084 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
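	(The sed pipeline at 18:13:09.602943 patches the coredns ConfigMap so cluster DNS resolves host.minikube.internal to the gateway address, which the start.go:806 line then confirms. The same edit expressed in Go, with the stanza and anchor strings lifted from the command above; the helper name is illustrative:

	package corednsutil

	import "strings"

	// injectHostRecord inserts a hosts{} stanza before the forward
	// plugin so pods resolve host.minikube.internal to 192.168.76.1,
	// matching the sed pipeline in the log above (sketch only).
	func injectHostRecord(corefile string) string {
	    stanza := "        hosts {\n" +
	        "           192.168.76.1 host.minikube.internal\n" +
	        "           fallthrough\n" +
	        "        }\n"
	    anchor := "        forward . /etc/resolv.conf"
	    return strings.Replace(corefile, anchor, stanza+anchor, 1)
	})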
	I0531 18:13:10.002603  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:13:10.002630  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:13:10.020129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:13:10.020167  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:13:10.106375  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:13:10.106399  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:13:10.123765  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:13:10.123794  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:13:10.144076  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.144119  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:13:10.215134  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.622559  265084 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:11.339286  265084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.12406901s)
	I0531 18:13:11.341817  265084 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
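	
	The addons.go lines above all follow one pattern: each manifest is copied onto the node ("scp memory --> /etc/kubernetes/addons/..."), then a single kubectl invocation applies the whole batch. The sketch below illustrates that copy-then-apply shape only; it is not minikube's own code (which runs these commands inside the node over an ssh_runner, as logged), and it runs locally against the kubectl binary and kubeconfig paths taken from the log. The applyAddons helper and the sample manifest are illustrative names, not minikube APIs.
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)
	
	// applyAddons writes each manifest under the addons dir, then applies
	// them all in one kubectl invocation, mirroring the pattern in the log.
	// Hypothetical helper; minikube itself drives this over ssh_runner.
	func applyAddons(manifests map[string][]byte) error {
		const dir = "/etc/kubernetes/addons"
		args := []string{"apply"}
		for name, body := range manifests {
			p := filepath.Join(dir, name)
			if err := os.WriteFile(p, body, 0o644); err != nil {
				return err
			}
			args = append(args, "-f", p)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.23.6/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}
	
	func main() {
		// Single illustrative manifest; the real run above ships ten dashboard files.
		if err := applyAddons(map[string][]byte{
			"dashboard-ns.yaml": []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n"),
		}); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
	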
	I0531 18:13:07.571617  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:09.572391  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:08.085066  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:10.085481  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:12.585610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:11.343086  265084 addons.go:417] enableAddons completed in 1.834929772s
	I0531 18:13:11.534064  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:12.071842  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:14.072366  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:16.571700  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:15.085503  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:17.585477  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:13.534515  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:15.534685  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.034548  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.571742  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:21.071863  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:20.085466  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:22.585412  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:20.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.033886  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.570885  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:25.571318  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:24.585439  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:27.085126  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:25.034013  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.534277  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.571734  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:30.071441  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:29.585497  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:32.084886  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:29.534310  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:31.534424  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:32.072003  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.571176  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:36.571707  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:36.585785  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:33.534780  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:35.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.033558  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.571878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:41.071580  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:39.085376  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:40.582100  269289 pod_ready.go:81] duration metric: took 4m0.001996579s waiting for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
	E0531 18:13:40.582169  269289 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:13:40.582215  269289 pod_ready.go:38] duration metric: took 4m0.00868413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:13:40.582275  269289 kubeadm.go:630] restartCluster took 4m15.929419982s
	W0531 18:13:40.582469  269289 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:13:40.582501  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:13:42.189588  269289 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.6070578s)
	I0531 18:13:42.189653  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:42.199068  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:13:42.205821  269289 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:13:42.205873  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:13:42.212208  269289 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:13:42.212266  269289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
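	
	The pod_ready.go:102 lines above are the visible side of a poll loop: check the pod's Ready condition, log the status dump if it is not met, retry, and give up at the 4m0s mark (pod_ready.go:81/66), after which the cluster is reset and re-initialized. Below is a minimal client-go sketch of that kind of Ready poll for illustration; it is not minikube's actual implementation. The kubeconfig path, pod name, namespace, and 4-minute timeout are copied from the log, and the 2-second poll interval is an assumption.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls until the pod's PodReady condition is True, or the
	// timeout elapses (mirroring the 4m0s give-up seen in the log above).
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil // Pending/Unschedulable pods land here every tick
		})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(cs, "kube-system", "coredns-64897985d-w2s2k", 4*time.Minute); err != nil {
			fmt.Println("timed out:", err) // corresponds to the WaitExtra failure above
		}
	}
	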
	I0531 18:13:40.033618  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:42.034150  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:43.072273  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:45.571260  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:44.534599  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:46.535205  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:48.071960  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:50.570994  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:49.034908  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:51.534484  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:56.088137  269289 out.go:204]   - Generating certificates and keys ...
	I0531 18:13:56.090627  269289 out.go:204]   - Booting up control plane ...
	I0531 18:13:56.093129  269289 out.go:204]   - Configuring RBAC rules ...
	I0531 18:13:56.094774  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:13:56.094792  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:13:56.096311  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:13:52.571414  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:55.071807  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:56.097594  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:13:56.101201  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:13:56.101218  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:13:56.113794  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:13:56.748029  269289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:13:56.748093  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.748112  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.822037  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.825886  269289 ops.go:34] apiserver oom_adj: -16
	I0531 18:13:57.376176  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:53.534591  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:55.536859  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:58.033966  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:57.071988  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:59.571338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:01.573338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:57.876331  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.376391  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.876462  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.376020  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.876318  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.375732  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.876322  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.376258  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.875649  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:02.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.534980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:02.535335  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:04.071798  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:06.571345  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:02.875698  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.376128  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.876242  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.376344  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.875652  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.375884  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.876374  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.375802  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.876594  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:07.376348  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.033642  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.534735  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.876085  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.431068  269289 kubeadm.go:1045] duration metric: took 11.683021831s to wait for elevateKubeSystemPrivileges.
	I0531 18:14:08.431097  269289 kubeadm.go:397] StartCluster complete in 4m43.818756101s
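	
	The run of "kubectl get sa default" commands between 18:13:56 and 18:14:08 is another poll loop: elevateKubeSystemPrivileges waits for the default ServiceAccount to exist before the minikube-rbac clusterrolebinding (created at 18:13:56) can take effect. A hedged client-go equivalent of that wait is sketched below; the logged run shells out to kubectl instead, and the 500ms interval and 2-minute ceiling here are assumptions.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Poll until the "default" ServiceAccount in the "default" namespace
		// exists; NotFound just means "not yet", so keep polling.
		err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			return err == nil, nil
		})
		fmt.Println("default ServiceAccount wait result:", err)
	}
	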
	I0531 18:14:08.431119  269289 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.431248  269289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:14:08.432696  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.947002  269289 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 18:14:08.947054  269289 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:14:08.948675  269289 out.go:177] * Verifying Kubernetes components...
	I0531 18:14:08.947153  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:14:08.947168  269289 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:14:08.947347  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:14:08.950015  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:14:08.950062  269289 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950074  269289 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950082  269289 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950092  269289 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950100  269289 addons.go:165] addon metrics-server should already be in state true
	I0531 18:14:08.950148  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950083  269289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 18:14:08.950101  269289 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950271  269289 addons.go:165] addon dashboard should already be in state true
	I0531 18:14:08.950319  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950061  269289 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950364  269289 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950378  269289 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:14:08.950417  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950518  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950641  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950757  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950872  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.963248  269289 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:14:08.996303  269289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:14:08.997754  269289 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:08.997776  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:14:08.997822  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:08.999414  269289 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.999437  269289 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:14:08.999466  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:09.001079  269289 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:14:08.999831  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:09.003755  269289 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:14:09.002479  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:14:09.005292  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:14:09.005383  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.007089  269289 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:14:09.008807  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:14:09.008840  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:14:09.008896  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.039305  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
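	
	The bash pipeline in the line above is how the host.minikube.internal record gets into CoreDNS: fetch the coredns ConfigMap, sed-insert a hosts{} block immediately before the "forward . /etc/resolv.conf" directive, and replace the ConfigMap. The snippet below performs the equivalent edit with client-go for illustration; it is a sketch, not what minikube runs (the log shows kubectl plus sed over ssh). The host IP and the forward-line anchor are copied from the logged command.
	
	package main
	
	import (
		"context"
		"strings"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.TODO()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Insert the hosts block once, just ahead of the forward stanza,
		// matching the sed anchor used in the logged pipeline.
		hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	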
	I0531 18:14:09.047368  269289 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.047395  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:14:09.047454  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.050164  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.052536  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.061683  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.090288  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.160469  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:09.212314  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:14:09.212343  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:14:09.216077  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.217010  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:14:09.217029  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:14:09.227272  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:14:09.227295  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:14:09.232066  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:14:09.232089  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:14:09.314812  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:14:09.314893  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:14:09.315135  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.315179  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:14:09.329470  269289 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0531 18:14:09.406176  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:14:09.406200  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:14:09.408128  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.429793  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:14:09.429823  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:14:09.516510  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:14:09.516537  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:14:09.530570  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:14:09.530597  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:14:09.612604  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:14:09.612631  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:14:09.631042  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:09.631070  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:14:09.711865  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:10.228589  269289 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531175604-6903"
	I0531 18:14:10.618859  269289 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:14:09.072442  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:11.571596  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:10.620376  269289 addons.go:417] enableAddons completed in 1.673239404s
	I0531 18:14:10.977314  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:09.534892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:12.034154  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:13.571853  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:16.071183  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:13.477113  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:15.477267  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:17.477503  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:14.034360  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:16.534633  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:18.071690  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:20.570878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:19.976762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:22.476749  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:18.534988  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:21.033579  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:23.072062  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:25.571452  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:24.976557  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:26.977352  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:23.534867  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:26.034090  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:27.571576  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:30.072241  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:29.477126  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:31.976050  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:28.534857  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:31.033495  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:33.034149  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:32.571726  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:35.071520  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:33.977624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:36.477062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:35.034245  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.534397  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.072086  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:39.072305  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:41.571404  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:38.977021  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:41.477149  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:40.033940  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:42.534713  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:44.072401  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:46.571051  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:43.977151  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:46.476598  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:45.033654  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:47.033892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:48.571836  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:51.071985  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:48.477002  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:50.477180  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:49.034062  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:51.534464  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:53.571426  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:55.571659  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:52.976837  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:55.476624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:57.477076  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:54.033904  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:56.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:58.071447  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:00.072133  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:59.976445  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:01.976750  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:58.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:01.034476  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:02.072236  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.571269  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:06.571629  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:06.476291  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:03.534865  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:06.033980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:08.571738  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:11.071854  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:08.476832  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:10.976476  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:08.533980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:10.534643  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.033474  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.072258  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:15.072469  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:12.977062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.476762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.534881  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:18.033601  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:17.570753  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:19.571375  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:21.571479  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:17.976899  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:19.977288  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:22.476772  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:20.534891  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.033328  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.571892  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:26.071396  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:24.477019  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:26.976517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:25.033998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:27.534916  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:28.072042  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:30.571432  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:32.073761  261225 node_ready.go:38] duration metric: took 4m0.008462542s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:15:32.075979  261225 out.go:177] 
	W0531 18:15:32.077511  261225 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:15:32.077526  261225 out.go:239] * 
	W0531 18:15:32.078185  261225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:15:32.080080  261225 out.go:177] 
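	
	The failure boxed above is the terminal state of the node_ready.go:58 lines that dominate this section: the no-preload node never reported Ready inside the 6m window, so start exits with GUEST_START. For reference, the condition those lines are checking can be expressed as below; this is a hedged sketch of the standard NodeReady test, and nodeReady is an illustrative name, not a minikube function.
	
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	// nodeReady reports whether the node's NodeReady condition is True;
	// kubelet flips it once the runtime, network (CNI), and node sync are up.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false // no NodeReady condition reported yet: treat as not ready
	}
	
	func main() {
		fmt.Println(nodeReady(&corev1.Node{})) // prints false: no conditions yet
	}
	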
	I0531 18:15:29.477285  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:31.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:30.033697  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:32.033898  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:33.977634  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:36.476328  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:34.034272  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:36.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:38.476673  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:40.477412  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:38.534463  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:40.534774  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.976241  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:44.977315  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:47.476536  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:45.034265  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:47.534278  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:49.477384  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:51.976596  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:49.534496  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:51.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:54.476365  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:56.477128  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:54.033999  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:56.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:58.976541  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:00.976604  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:58.535059  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:01.033371  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:03.033446  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:02.976738  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:04.976824  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:07.476516  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:05.033660  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:07.034297  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:09.976551  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:11.977337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:09.534321  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:11.534699  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:14.476763  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:16.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:14.033838  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:16.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:18.976865  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:20.977366  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:18.533927  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:20.534762  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.034186  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.477097  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.976964  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.034285  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:27.534416  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:28.476490  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.477181  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.033979  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.534354  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.977105  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:35.477096  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:37.477182  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:34.534436  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:37.034012  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:39.976471  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:42.476550  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:39.534598  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:41.534728  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:44.976701  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:46.976746  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:44.033664  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:46.534914  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:49.476635  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:51.476946  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:48.535136  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:51.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:53.976362  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:55.976980  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:53.534196  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:55.534525  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:57.535035  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:58.476831  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.477321  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.033962  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.534939  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.976221  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.477114  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:07.477398  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.033341  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:07.033678  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.034288  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.536916  265084 node_ready.go:38] duration metric: took 4m0.012822769s waiting for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:17:09.538829  265084 out.go:177] 
	W0531 18:17:09.540332  265084 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:17:09.540349  265084 out.go:239] * 
	W0531 18:17:09.541063  265084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:17:09.542651  265084 out.go:177] 
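The default-k8s-different-port profile gives up here after 4m0s of node-Ready polling and exits with GUEST_START (the embed-certs profile fails identically below). To collect the log bundle the advice box asks for, the standard minikube CLI call with this run's profile name should work (profile name taken from the log above; flag usage per minikube's documented CLI):

    out/minikube-linux-amd64 logs --file=logs.txt -p default-k8s-different-port-20220531175509-6903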
	I0531 18:17:09.976861  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:12.476674  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:14.977142  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:17.477283  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:19.976577  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:21.978337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:24.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:26.476575  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:28.977103  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:31.476611  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:33.976344  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:35.977204  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:38.476416  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:40.977195  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:43.476141  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:45.476421  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:47.476462  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:49.476517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:51.477331  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:53.977100  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:56.476989  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:58.477779  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:00.976553  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:03.477250  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:05.976740  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:08.476618  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:08.978675  269289 node_ready.go:38] duration metric: took 4m0.015379225s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:18:08.980830  269289 out.go:177] 
	W0531 18:18:08.982370  269289 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:18:08.982392  269289 out.go:239] * 
	W0531 18:18:08.983213  269289 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:18:08.984834  269289 out.go:177] 
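embed-certs hits the identical 4m0s timeout. A quick way to see the same NotReady state the wait loop was polling, assuming the kubeconfig context follows the profile name as it does elsewhere in this report:

    kubectl --context embed-certs-20220531175604-6903 get nodes -o wide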
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	3203e4e76ecf0       6de166512aa22       About a minute ago   Exited              kindnet-cni               7                   c928703617d79
	455fbb97d03b9       4c03754524064       13 minutes ago       Running             kube-proxy                0                   5c14a7f925ed3
	12697fd1421e9       25f8c7f3da61c       13 minutes ago       Running             etcd                      2                   fa854a11b419b
	1d2e62f7898bb       595f327f224a4       13 minutes ago       Running             kube-scheduler            2                   69ec4dcae13da
	cb4b84c2abb44       df7b72818ad2e       13 minutes ago       Running             kube-controller-manager   2                   adb58aba13dda
	0eab29b61aa2c       8fa62c12256df       13 minutes ago       Running             kube-apiserver            2                   116fe8172c205
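The runtime table narrows the failure down: kindnet-cni is on attempt 7 and Exited, while every control-plane container has been Running for 13 minutes. One way to pull its logs straight from the runtime, reusing the truncated container ID from the table (a sketch assuming crictl on the node, as in standard minikube images, and ID-prefix matching):

    out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220531175509-6903 "sudo crictl logs 3203e4e76ecf0"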
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 18:08:09 UTC, end at Tue 2022-05-31 18:26:12 UTC. --
	May 31 18:17:00 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:00.936418536Z" level=warning msg="cleaning up after shim disconnected" id=436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435 namespace=k8s.io
	May 31 18:17:00 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:00.936432360Z" level=info msg="cleaning up dead shim"
	May 31 18:17:00 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:00.945693053Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:17:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4309 runtime=io.containerd.runc.v2\n"
	May 31 18:17:01 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:01.049666090Z" level=info msg="RemoveContainer for \"8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892\""
	May 31 18:17:01 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:17:01.053901723Z" level=info msg="RemoveContainer for \"8bc1c5c0200ea1776b1c96530943367fa8ddf9f983d46f752799f2edd01b8892\" returns successfully"
	May 31 18:19:41 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:41.615911355Z" level=info msg="CreateContainer within sandbox \"c928703617d79c73d40117ac859d5d74c21fa1b208ad5c27fb9062be1d53a963\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	May 31 18:19:41 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:41.627450619Z" level=info msg="CreateContainer within sandbox \"c928703617d79c73d40117ac859d5d74c21fa1b208ad5c27fb9062be1d53a963\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"2ed1e5633f5b41ead3ef25499133f2607c05b4304a39e5959b0e55a87946ff81\""
	May 31 18:19:41 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:41.627882565Z" level=info msg="StartContainer for \"2ed1e5633f5b41ead3ef25499133f2607c05b4304a39e5959b0e55a87946ff81\""
	May 31 18:19:41 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:41.696663157Z" level=info msg="StartContainer for \"2ed1e5633f5b41ead3ef25499133f2607c05b4304a39e5959b0e55a87946ff81\" returns successfully"
	May 31 18:19:51 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:51.935039644Z" level=info msg="shim disconnected" id=2ed1e5633f5b41ead3ef25499133f2607c05b4304a39e5959b0e55a87946ff81
	May 31 18:19:51 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:51.935110064Z" level=warning msg="cleaning up after shim disconnected" id=2ed1e5633f5b41ead3ef25499133f2607c05b4304a39e5959b0e55a87946ff81 namespace=k8s.io
	May 31 18:19:51 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:51.935120242Z" level=info msg="cleaning up dead shim"
	May 31 18:19:51 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:51.944041448Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:19:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4639 runtime=io.containerd.runc.v2\n"
	May 31 18:19:52 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:52.350236661Z" level=info msg="RemoveContainer for \"436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435\""
	May 31 18:19:52 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:19:52.354304568Z" level=info msg="RemoveContainer for \"436d78562200c4bb36bfedc4e9a3a896b337d2359180a07aadef782a1afbc435\" returns successfully"
	May 31 18:24:57 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:24:57.616020584Z" level=info msg="CreateContainer within sandbox \"c928703617d79c73d40117ac859d5d74c21fa1b208ad5c27fb9062be1d53a963\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	May 31 18:24:57 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:24:57.628043224Z" level=info msg="CreateContainer within sandbox \"c928703617d79c73d40117ac859d5d74c21fa1b208ad5c27fb9062be1d53a963\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5\""
	May 31 18:24:57 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:24:57.628506273Z" level=info msg="StartContainer for \"3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5\""
	May 31 18:24:57 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:24:57.716120388Z" level=info msg="StartContainer for \"3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5\" returns successfully"
	May 31 18:25:07 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:25:07.937922158Z" level=info msg="shim disconnected" id=3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5
	May 31 18:25:07 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:25:07.937974724Z" level=warning msg="cleaning up after shim disconnected" id=3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5 namespace=k8s.io
	May 31 18:25:07 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:25:07.937987371Z" level=info msg="cleaning up dead shim"
	May 31 18:25:07 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:25:07.947539159Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:25:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4742 runtime=io.containerd.runc.v2\n"
	May 31 18:25:08 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:25:08.878900538Z" level=info msg="RemoveContainer for \"2ed1e5633f5b41ead3ef25499133f2607c05b4304a39e5959b0e55a87946ff81\""
	May 31 18:25:08 default-k8s-different-port-20220531175509-6903 containerd[380]: time="2022-05-31T18:25:08.883133037Z" level=info msg="RemoveContainer for \"2ed1e5633f5b41ead3ef25499133f2607c05b4304a39e5959b0e55a87946ff81\" returns successfully"
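The containerd log shows the crash loop's shape: each kindnet-cni attempt starts cleanly, the shim disconnects about ten seconds later, and the next attempt only comes after kubelet's five-minute backoff (18:19:41 -> 18:24:57). The accumulated restart count could be read off the pod status, e.g. (jsonpath sketch; pod name taken from the kubelet log below):

    kubectl --context default-k8s-different-port-20220531175509-6903 -n kube-system get pod kindnet-gt5pn -o jsonpath='{.status.containerStatuses[0].restartCount}'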
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220531175509-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220531175509-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:12:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220531175509-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:26:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:23:22 +0000   Tue, 31 May 2022 18:12:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:23:22 +0000   Tue, 31 May 2022 18:12:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:23:22 +0000   Tue, 31 May 2022 18:12:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:23:22 +0000   Tue, 31 May 2022 18:12:50 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220531175509-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                6be22935-bf30-494f-8e0a-066b777ef988
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220531175509-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-gt5pn                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220531175509-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220531175509-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tpq55                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220531175509-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 13m   kube-proxy  
	  Normal  Starting                 13m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet     Node default-k8s-different-port-20220531175509-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet     Updated Node Allocatable limit across pods
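The describe output ties the failure together: the Ready condition has been False since 18:12:50 with "cni plugin not initialized", and the resulting not-ready taints are what leave the dashboard pods Unschedulable later in this report. The condition message can be queried directly (jsonpath filter sketch, same context as above):

    kubectl --context default-k8s-different-port-20220531175509-6903 get node default-k8s-different-port-20220531175509-6903 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'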
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [12697fd1421e93b5d542c7d997c01069ce82acf0a8fc0aaea55e814d7935d1a8] <==
	* {"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:12:50.013Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:12:50.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220531175509-6903 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:12:50.905Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:12:50.906Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:12:50.906Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:12:50.906Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-05-31T18:12:50.906Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:22:50.918Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":690}
	{"level":"info","ts":"2022-05-31T18:22:50.921Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":690,"took":"2.022319ms"}
	
	* 
	* ==> kernel <==
	*  18:26:13 up  2:08,  0 users,  load average: 0.34, 0.27, 0.54
	Linux default-k8s-different-port-20220531175509-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0eab29b61aa2c7e0f970b49b0f8056ab24dbd7be68969108e87bbdcfb92db41a] <==
	* I0531 18:16:11.418052       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:17:53.689561       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:17:53.689686       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:17:53.689717       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:18:53.690655       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:18:53.690742       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:18:53.690757       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:20:53.691136       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:20:53.691236       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:20:53.691246       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:22:53.695597       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:22:53.695688       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:22:53.695704       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:23:53.696014       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:23:53.696108       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:23:53.696123       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:25:53.696850       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:25:53.696930       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:25:53.696944       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
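The recurring 503s here look like a side effect rather than a cause: metrics-server never gets scheduled on the NotReady node, so its aggregated API stays unavailable. Its registration status could be checked with (standard kubectl resource name; context assumed as above):

    kubectl --context default-k8s-different-port-20220531175509-6903 get apiservice v1beta1.metrics.k8s.io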
	
	* 
	* ==> kube-controller-manager [cb4b84c2abb44a68159288cb28f3587bdecfc650c3eee702f92f1181a79626da] <==
	* W0531 18:20:08.471037       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:20:38.062694       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:20:38.485803       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:21:08.073151       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:21:08.498770       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:21:38.083627       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:21:38.513551       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:22:08.093317       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:22:08.530672       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:22:38.102445       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:22:38.546222       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:23:08.113743       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:23:08.560352       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:23:38.124366       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:23:38.575070       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:24:08.148540       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:24:08.591086       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:24:38.172438       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:24:38.605373       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:25:08.195329       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:25:08.621968       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:25:38.208817       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:25:38.636077       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:26:08.229686       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:26:08.649843       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [455fbb97d03b9f72cb1f4a7f7f3c22f76652cadb8fc46891ed603d334db62140] <==
	* I0531 18:13:09.225018       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0531 18:13:09.225062       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0531 18:13:09.225093       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:13:09.320116       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:13:09.320155       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:13:09.320163       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:13:09.320175       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:13:09.320546       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:13:09.321064       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:13:09.321069       1 config.go:317] "Starting service config controller"
	I0531 18:13:09.321098       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:13:09.321096       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:13:09.421426       1 shared_informer.go:247] Caches are synced for service config 
	I0531 18:13:09.421457       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [1d2e62f7898bb3d29c16e59fc578ac5fc7bc548fcb40e02c3b660f074314033b] <==
	* W0531 18:12:52.726679       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:12:52.727088       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:12:52.726698       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:12:52.727110       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:12:52.727124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:12:52.727125       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:12:52.727541       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:12:52.727647       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:12:52.727703       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:12:52.727849       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:12:52.727874       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:12:52.727133       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:12:53.683196       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:12:53.683237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:12:53.712445       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:12:53.712495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:12:53.778372       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:12:53.778418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:12:53.806469       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:12:53.806496       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:12:53.874326       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:12:53.874364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:12:53.915573       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:12:53.915612       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0531 18:12:54.221320       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:08:09 UTC, end at Tue 2022-05-31 18:26:13 UTC. --
	May 31 18:25:08 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:25:08.877914    3049 scope.go:110] "RemoveContainer" containerID="3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5"
	May 31 18:25:08 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:08.878252    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:25:10 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:10.927852    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:15 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:15.929065    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:20 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:20.930151    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:21 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:25:21.613447    3049 scope.go:110] "RemoveContainer" containerID="3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5"
	May 31 18:25:21 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:21.613786    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:25:25 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:25.931473    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:30 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:30.933073    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:34 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:25:34.613419    3049 scope.go:110] "RemoveContainer" containerID="3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5"
	May 31 18:25:34 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:34.613680    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:25:35 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:35.934430    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:40 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:40.935679    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:45 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:25:45.613669    3049 scope.go:110] "RemoveContainer" containerID="3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5"
	May 31 18:25:45 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:45.613947    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:25:45 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:45.936825    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:50 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:50.938387    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:55 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:55.939949    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:57 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:25:57.613072    3049 scope.go:110] "RemoveContainer" containerID="3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5"
	May 31 18:25:57 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:25:57.613378    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:26:00 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:26:00.940809    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:05 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:26:05.942315    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:09 default-k8s-different-port-20220531175509-6903 kubelet[3049]: I0531 18:26:09.613149    3049 scope.go:110] "RemoveContainer" containerID="3203e4e76ecf0d6c515024e0fe55e45e94d687f43ab666475ee31571eddb2ff5"
	May 31 18:26:09 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:26:09.613436    3049 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-gt5pn_kube-system(f6c2aec5-be4f-49c9-abdd-bc9f7954587c)\"" pod="kube-system/kindnet-gt5pn" podUID=f6c2aec5-be4f-49c9-abdd-bc9f7954587c
	May 31 18:26:10 default-k8s-different-port-20220531175509-6903 kubelet[3049]: E0531 18:26:10.943515    3049 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	
-- /stdout --
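With kubelet stuck in a 5m0s CrashLoopBackOff for kindnet-cni, the exit reason should be visible in the previous container's logs; a plausible follow-up (pod name from the kubelet log above):

    kubectl --context default-k8s-different-port-20220531175509-6903 -n kube-system logs kindnet-gt5pn --previous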
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-qnj2l metrics-server-b955d9d8-q5pgx storage-provisioner dashboard-metrics-scraper-56974995fc-cwdrn kubernetes-dashboard-8469778f77-nd4tk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-qnj2l metrics-server-b955d9d8-q5pgx storage-provisioner dashboard-metrics-scraper-56974995fc-cwdrn kubernetes-dashboard-8469778f77-nd4tk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-qnj2l metrics-server-b955d9d8-q5pgx storage-provisioner dashboard-metrics-scraper-56974995fc-cwdrn kubernetes-dashboard-8469778f77-nd4tk: exit status 1 (52.161072ms)
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-qnj2l" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-q5pgx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-cwdrn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-nd4tk" not found
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220531175509-6903 describe pod coredns-64897985d-qnj2l metrics-server-b955d9d8-q5pgx storage-provisioner dashboard-metrics-scraper-56974995fc-cwdrn kubernetes-dashboard-8469778f77-nd4tk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.38s)
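The post-mortem sequence above first lists every pod not in phase Running, then tries to describe each one; by that point teardown has already removed the pods, hence the NotFound errors. The field-selector pattern itself is handy for manual triage; a sketch, with <context> standing in for a real kubeconfig context:

	# List all pods across namespaces that are not in phase Running
	kubectl --context <context> get po -A --field-selector=status.phase!=Running

	# Describe a pod only if it still exists, avoiding the NotFound failure seen above
	kubectl --context <context> -n kube-system get pod storage-provisioner >/dev/null 2>&1 \
	  && kubectl --context <context> -n kube-system describe pod storage-provisioner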
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.39s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-h54ht" [41b64e7e-8b48-4d8b-be18-0cf891fa0509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0531 18:18:57.610365    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 18:19:11.143565    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 18:19:25.073390    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 18:19:48.440832    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 18:19:58.425768    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:19:59.712564    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
E0531 18:20:15.863533    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 18:21:05.001965    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 18:22:14.188043    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 18:22:15.749799    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/old-k8s-version-20220531174534-6903/client.crt: no such file or directory
E0531 18:22:51.487930    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 18:23:18.907248    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0531 18:26:05.001968    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
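This warning then repeats for the rest of the wait: once the test's 9m0s poll context hits its deadline, client-go's rate limiter fails every further list attempt immediately with "context deadline exceeded", until the timeout is declared below. The same selector can be checked by hand with a bounded request timeout, e.g.:

	kubectl --context embed-certs-20220531175604-6903 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard --request-timeout=10s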
start_stop_delete_test.go:276: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
start_stop_delete_test.go:276: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-05-31 18:27:11.362202546 +0000 UTC m=+4482.887348813
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 describe po kubernetes-dashboard-8469778f77-h54ht -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20220531175604-6903 describe po kubernetes-dashboard-8469778f77-h54ht -n kubernetes-dashboard: context deadline exceeded (1.414µs)
start_stop_delete_test.go:276: kubectl --context embed-certs-20220531175604-6903 describe po kubernetes-dashboard-8469778f77-h54ht -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 logs kubernetes-dashboard-8469778f77-h54ht -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20220531175604-6903 logs kubernetes-dashboard-8469778f77-h54ht -n kubernetes-dashboard: context deadline exceeded (132ns)
start_stop_delete_test.go:276: kubectl --context embed-certs-20220531175604-6903 logs kubernetes-dashboard-8469778f77-h54ht -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
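The dashboard pod never left Pending because the cluster's only node carried the node.kubernetes.io/not-ready taint (see the Unschedulable event at the top of this test), which in turn traces back to the uninitialized CNI. A quick check of node readiness and taints, assuming the context and node names from this test:

	kubectl --context embed-certs-20220531175604-6903 get nodes -o wide
	kubectl --context embed-certs-20220531175604-6903 describe node embed-certs-20220531175604-6903 \
	  | grep -i -A 2 taints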
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531175604-6903
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531175604-6903:
-- stdout --
	[
	    {
	        "Id": "ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f",
	        "Created": "2022-05-31T17:56:17.948185818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269572,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T18:09:08.465731029Z",
	            "FinishedAt": "2022-05-31T18:09:07.267063318Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/hosts",
	        "LogPath": "/var/lib/docker/containers/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f/ac8a0a6250b5af48bd9bc5391bd8d8b744ff034fb5698d0804498c4452ee136f-json.log",
	        "Name": "/embed-certs-20220531175604-6903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531175604-6903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531175604-6903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d-init/diff:/var/lib/docker/overlay2/dbf6491a01034b3a4072d24dd6a663bd548c011c92fc9b0fcfc3126c2363d90a/diff:/var/lib/docker/overlay2/3c1ed797a5ae56f028857fe150bb3099da77a482f0091d9606de67b91e4e652e/diff:/var/lib/docker/overlay2/d7e2932238179e26f1a5daf6042a83c249956f17b6ad005b7353231f416f2b7d/diff:/var/lib/docker/overlay2/0763fd45251825508da825d9b0bd4098a3a977f2882bb7b9cf670e5a7753ad22/diff:/var/lib/docker/overlay2/cb392086e47bfb43cc3594030734f488c0033e950afb3144711589d50b43efb1/diff:/var/lib/docker/overlay2/02c65c0264c1074acaa743ae260beed1b5ccc8f2206fefcb40c04a4d7c1e6abb/diff:/var/lib/docker/overlay2/c015a2834a1f8b21bc7d6428c323276e8503b566a1a0778aeda5bbcc654e5150/diff:/var/lib/docker/overlay2/a4839d6f1e0f5f3687c7690a9f8ca166859921020a566f1005de8d773f93c9ee/diff:/var/lib/docker/overlay2/a18963339ac1f1f8e0e4d547f6368687fa73687b01f44934d8d21d6bb7182ecb/diff:/var/lib/docker/overlay2/5442941f36934a20c47a26bdd0ac0e178cba199b7209a342da76226d1cad976e/diff:/var/lib/docker/overlay2/0c8d0772e1c24cf00ab3f0d27eff1ba7bd4a027ed4cd0afb9e8fdea875294a64/diff:/var/lib/docker/overlay2/c1c21a60ed745936a48a150c52e0e18f7571835bdc592cecc30ac9747bbf8509/diff:/var/lib/docker/overlay2/cfa8da436eb3d4e036f5e1007bf99f292abe7a12f24aaa0eec49081a278aac16/diff:/var/lib/docker/overlay2/5d099359d29d33d2ce53db8a4b7807825d452e40dfa295fed5677f6f16281a56/diff:/var/lib/docker/overlay2/6aaf0114daed8e0327994de1a0c1d11e8b320ddcc0dea8f894ec704da03ca575/diff:/var/lib/docker/overlay2/12acd7f09547efb67f194a93ddd363d3a3cff4627e139182fd99642ac8ef9ec8/diff:/var/lib/docker/overlay2/a27e412f7c4bc23da58ac979621836b3095478f1539850a335b8fee949a12d03/diff:/var/lib/docker/overlay2/a7ddb23a0bd20711914d3aee5f5732dcbe5917e4ec012c8a2757c43f47adeb2a/diff:/var/lib/docker/overlay2/ae8e8e269a08beb65abd79bdd9835868c07fca69aaeeb6beaf62ba0d7d4b080a/diff:/var/lib/docker/overlay2/d540eef68cf238b1991ad0eafe005ad88e868d96baa74f2aa71dfa70b6fd6ed2/diff:/var/lib/docker/overlay2/4448a9c22cb5f586f9b7b4f3541723375bd97ecec0a376b1f8de78174a5a10d3/diff:/var/lib/docker/overlay2/4d1a3fab003fbb6244184b096d7832cfa556b4f997ec47e2fbee5dfd3faddd5e/diff:/var/lib/docker/overlay2/c5393477c78e84d03c90d6aeaa923dc9e1cf62d0d6c8f9e4718724e8a5dfbfbe/diff:/var/lib/docker/overlay2/21465f8eae1df6dd32df5faf99467261c4eac5e53ff2d5403086abf8ecc0cbe8/diff:/var/lib/docker/overlay2/ec04c9562a86a83f018c54450466a80feef26b35e264b27ba678d16c05df6f02/diff:/var/lib/docker/overlay2/73321fbd1379f750f41c8b6d43a0440c73d50e454b6832031a4a557aba4a2151/diff:/var/lib/docker/overlay2/37620ca64108224d5ab7f36f75b59daf3030d31bc08bedb2f72444ad994f8897/diff:/var/lib/docker/overlay2/98fdb99bcc9553d91004f45c575394e1b41ac3b2ef4f7f494dcdd6abaff840df/diff:/var/lib/docker/overlay2/287665b5be538ae2cfb4364712aadf269c71e3bebab15d0a12e23f5e6ee3361f/diff:/var/lib/docker/overlay2/dfb789ddaed5c96617b17cd405ce280d27274d814d2ad7a19b31bb24652c85e8/diff:/var/lib/docker/overlay2/aef3ed50d8d11ab303f48ad7db430be8a828ecf859f6a8c7dd1566ae990471b9/diff:/var/lib/docker/overlay2/06d2fb8d22f3b74e7f3d00f9be6f164a887dd07c369aae93adcd3e85702d58e9/diff:/var/lib/docker/overlay2/d2051e9d4b386a53413fbcb576491cc873e980c41359a8b1478b0570cedc5fa4/diff:/var/lib/docker/overlay2/a57a33d3d32c64090ddf0e940d15f17b2004b7684b7217f20c4e6fe40d800511/diff:/var/lib/docker/overlay2/0c845d270b5e10db5f6f8502a3da03e638e93f20934f7394d8c015cf1c36d10b/diff:/var/lib/docker/overlay2/25ac4fb3f8b980e89552c897c1e2d234e6d6d9c5ae5feba1ac68ba44f6b2086d/diff:/var/lib/docker/overlay2/80ed78d69449aee5edc61c862b4e044d7764c1acb066e313e19e7224939e86b0/diff:/var/lib/docker/overlay2/947fc021ab4c7e70f575f85daf2c27b55dd6b46c5fdbde65fb30ba0e845244c5/diff:/var/lib/docker/overlay2/74aa543aa1786d425dcdf395a15ba9be0a9e731d278c49123d5344465d3052ee/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83e551f0b93b58183af3b32c942e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e6f9249faec4ff5e71bd8d98dbdb7161470ee2140340a891def9ccd43e6ee66d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531175604-6903",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531175604-6903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531175604-6903",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531175604-6903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad7aac916bee030844fa0e7c143e28fc250c0ad6f17da5b84d68ccafe87eb665",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49439"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ad7aac916bee",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531175604-6903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ac8a0a6250b5",
	                        "embed-certs-20220531175604-6903"
	                    ],
	                    "NetworkID": "810e286ea2469d855f00ec56445da0705b1ca1a44b439a6e099264f06730a27d",
	                    "EndpointID": "22e5779a5560b488e880e110e17956fdd53eadbe6443a536098d446845659c35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
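The inspect output above shows the node container itself is healthy: State.Status is "running", RestartCount is 0, and the API server port 8443/tcp is published on 127.0.0.1:49439, so the failure sits inside the cluster rather than at the Docker level. Those fields can be pulled directly with a Go template instead of scanning the full JSON; a sketch:

	docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}} apiserver=127.0.0.1:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
	  embed-certs-20220531175604-6903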
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220531175604-6903 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | newest-cni-20220531175602-6903                    | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                    |                                                |         |                |                     |                     |
	| delete  | -p                                                | newest-cni-20220531175602-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:01 UTC | 31 May 22 18:01 UTC |
	|         | newest-cni-20220531175602-6903                    |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:06 UTC | 31 May 22 18:06 UTC |
	|         | no-preload-20220531175323-6903                    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903    | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903    | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:07 UTC |
	|         | default-k8s-different-port-20220531175509-6903    |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:07 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903    |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | default-k8s-different-port-20220531175509-6903    |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:08 UTC |
	|         | embed-certs-20220531175604-6903                   |                                                |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:08 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                   |                                                |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:09 UTC | 31 May 22 18:09 UTC |
	|         | embed-certs-20220531175604-6903                   |                                                |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:15 UTC | 31 May 22 18:15 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903    | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:17 UTC | 31 May 22 18:17 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | embed-certs-20220531175604-6903                   | embed-certs-20220531175604-6903                | jenkins | v1.26.0-beta.1 | 31 May 22 18:18 UTC | 31 May 22 18:18 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| logs    | no-preload-20220531175323-6903                    | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:24 UTC | 31 May 22 18:24 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220531175323-6903                 | jenkins | v1.26.0-beta.1 | 31 May 22 18:24 UTC | 31 May 22 18:24 UTC |
	|         | no-preload-20220531175323-6903                    |                                                |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220531175509-6903    | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:26 UTC | 31 May 22 18:26 UTC |
	|         | logs -n 25                                        |                                                |         |                |                     |                     |
	| delete  | -p                                                | default-k8s-different-port-20220531175509-6903 | jenkins | v1.26.0-beta.1 | 31 May 22 18:26 UTC | 31 May 22 18:26 UTC |
	|         | default-k8s-different-port-20220531175509-6903    |                                                |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 18:09:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:09:07.772021  269289 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:09:07.772181  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772198  269289 out.go:309] Setting ErrFile to fd 2...
	I0531 18:09:07.772206  269289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:09:07.772308  269289 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 18:09:07.772576  269289 out.go:303] Setting JSON to false
	I0531 18:09:07.774031  269289 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6699,"bootTime":1654013849,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:09:07.774104  269289 start.go:125] virtualization: kvm guest
	I0531 18:09:07.776455  269289 out.go:177] * [embed-certs-20220531175604-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:09:07.777955  269289 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 18:09:07.777894  269289 notify.go:193] Checking for updates...
	I0531 18:09:07.779456  269289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:09:07.780983  269289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:07.782421  269289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 18:09:07.783767  269289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:09:07.785508  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:07.786063  269289 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 18:09:07.823614  269289 docker.go:137] docker version: linux-20.10.16
	I0531 18:09:07.823700  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:07.921312  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.851605066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:07.921419  269289 docker.go:254] overlay module found
	I0531 18:09:07.923729  269289 out.go:177] * Using the docker driver based on existing profile
	I0531 18:09:07.925091  269289 start.go:284] selected driver: docker
	I0531 18:09:07.925101  269289 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:07.925198  269289 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:09:07.926037  269289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:09:08.026136  269289 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 18:09:07.954605342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 18:09:08.026404  269289 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:09:08.026427  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:08.026434  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:08.026459  269289 start_flags.go:306] config:
	{Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:08.029735  269289 out.go:177] * Starting control plane node embed-certs-20220531175604-6903 in cluster embed-certs-20220531175604-6903
	I0531 18:09:08.031102  269289 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 18:09:08.032588  269289 out.go:177] * Pulling base image ...
	I0531 18:09:08.033842  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:08.033877  269289 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 18:09:08.033887  269289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0531 18:09:08.033901  269289 cache.go:57] Caching tarball of preloaded images
	I0531 18:09:08.034419  269289 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 18:09:08.034462  269289 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0531 18:09:08.034678  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.080805  269289 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 18:09:08.080832  269289 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 18:09:08.080845  269289 cache.go:206] Successfully downloaded all kic artifacts
	I0531 18:09:08.080882  269289 start.go:352] acquiring machines lock for embed-certs-20220531175604-6903: {Name:mk429de72637f09b98b2265dcb2e061fa2d9b440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:09:08.080970  269289 start.go:356] acquired machines lock for "embed-certs-20220531175604-6903" in 68.005µs
	I0531 18:09:08.080987  269289 start.go:94] Skipping create...Using existing machine configuration
	I0531 18:09:08.080995  269289 fix.go:55] fixHost starting: 
	I0531 18:09:08.081277  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.111663  269289 fix.go:103] recreateIfNeeded on embed-certs-20220531175604-6903: state=Stopped err=<nil>
	W0531 18:09:08.111695  269289 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 18:09:08.114165  269289 out.go:177] * Restarting existing docker container for "embed-certs-20220531175604-6903" ...
	I0531 18:09:03.375777  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:05.375862  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.875726  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:07.204868  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:09.704483  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:11.704965  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:08.115697  269289 cli_runner.go:164] Run: docker start embed-certs-20220531175604-6903
	I0531 18:09:08.473488  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:09:08.506356  269289 kic.go:416] container "embed-certs-20220531175604-6903" state is running.
	I0531 18:09:08.506679  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:08.539365  269289 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/config.json ...
	I0531 18:09:08.539556  269289 machine.go:88] provisioning docker machine ...
	I0531 18:09:08.539577  269289 ubuntu.go:169] provisioning hostname "embed-certs-20220531175604-6903"
	I0531 18:09:08.539613  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:08.571324  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:08.571535  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:08.571555  269289 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531175604-6903 && echo "embed-certs-20220531175604-6903" | sudo tee /etc/hostname
	I0531 18:09:08.572224  269289 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40978->127.0.0.1:49442: read: connection reset by peer
	I0531 18:09:11.691333  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531175604-6903
	
	I0531 18:09:11.691423  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:11.724149  269289 main.go:134] libmachine: Using SSH client type: native
	I0531 18:09:11.724284  269289 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49442 <nil> <nil>}
	I0531 18:09:11.724303  269289 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531175604-6903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531175604-6903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531175604-6903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:09:11.834529  269289 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:09:11.834559  269289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
	I0531 18:09:11.834578  269289 ubuntu.go:177] setting up certificates
	I0531 18:09:11.834589  269289 provision.go:83] configureAuth start
	I0531 18:09:11.834632  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:11.865871  269289 provision.go:138] copyHostCerts
	I0531 18:09:11.865924  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
	I0531 18:09:11.865932  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
	I0531 18:09:11.865982  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
	I0531 18:09:11.866066  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
	I0531 18:09:11.866081  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
	I0531 18:09:11.866104  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
	I0531 18:09:11.866152  269289 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
	I0531 18:09:11.866160  269289 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
	I0531 18:09:11.866179  269289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
	I0531 18:09:11.866227  269289 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220531175604-6903 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531175604-6903]
	I0531 18:09:12.090438  269289 provision.go:172] copyRemoteCerts
	I0531 18:09:12.090495  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:09:12.090534  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.123286  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.206789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:09:12.223789  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:09:12.239992  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 18:09:12.256268  269289 provision.go:86] duration metric: configureAuth took 421.669237ms
	I0531 18:09:12.256294  269289 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:09:12.256475  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:09:12.256491  269289 machine.go:91] provisioned docker machine in 3.716920297s
	I0531 18:09:12.256499  269289 start.go:306] post-start starting for "embed-certs-20220531175604-6903" (driver="docker")
	I0531 18:09:12.256507  269289 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:09:12.256545  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:09:12.256574  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.289898  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.374015  269289 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:09:12.376709  269289 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:09:12.376729  269289 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:09:12.376738  269289 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:09:12.376744  269289 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 18:09:12.376754  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
	I0531 18:09:12.376804  269289 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
	I0531 18:09:12.376870  269289 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
	I0531 18:09:12.376948  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:09:12.383195  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:12.399428  269289 start.go:309] post-start completed in 142.913406ms
	I0531 18:09:12.399507  269289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:09:12.399549  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.432219  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.515046  269289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:09:12.518751  269289 fix.go:57] fixHost completed within 4.437752156s
	I0531 18:09:12.518774  269289 start.go:81] releasing machines lock for "embed-certs-20220531175604-6903", held for 4.437792479s
	I0531 18:09:12.518855  269289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531175604-6903
	I0531 18:09:12.550553  269289 ssh_runner.go:195] Run: systemctl --version
	I0531 18:09:12.550601  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.550641  269289 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 18:09:12.550697  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:09:12.582421  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.582872  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:09:12.659019  269289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0531 18:09:12.679913  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 18:09:12.688643  269289 docker.go:187] disabling docker service ...
	I0531 18:09:12.688702  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:09:12.697529  269289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:09:12.706100  269289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:09:09.875798  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.375479  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:13.705008  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.205471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:12.785582  269289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:09:12.855815  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:09:12.864020  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:09:12.875934  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0531 18:09:12.888218  269289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:09:12.894177  269289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:09:12.900048  269289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:09:12.967246  269289 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0531 18:09:13.034030  269289 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0531 18:09:13.034097  269289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0531 18:09:13.037697  269289 start.go:468] Will wait 60s for crictl version
	I0531 18:09:13.037755  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:13.061656  269289 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-05-31T18:09:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0531 18:09:14.375758  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:16.375804  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.205548  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.207254  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:18.375866  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:20.875902  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.108851  269289 ssh_runner.go:195] Run: sudo crictl version
	I0531 18:09:24.132832  269289 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.4
	RuntimeApiVersion:  v1alpha2
	I0531 18:09:24.132887  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.159676  269289 ssh_runner.go:195] Run: containerd --version
	I0531 18:09:24.189925  269289 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0531 18:09:24.191315  269289 cli_runner.go:164] Run: docker network inspect embed-certs-20220531175604-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:09:24.222603  269289 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:09:24.225783  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.236610  269289 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0531 18:09:22.705143  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.205093  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:24.237875  269289 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0531 18:09:24.237934  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.260983  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.261003  269289 containerd.go:521] Images already preloaded, skipping extraction
	I0531 18:09:24.261042  269289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:09:24.282964  269289 containerd.go:607] all images are preloaded for containerd runtime.
	I0531 18:09:24.282986  269289 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:09:24.283024  269289 ssh_runner.go:195] Run: sudo crictl info
	I0531 18:09:24.304408  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:24.304433  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:24.304447  269289 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:09:24.304466  269289 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531175604-6903 NodeName:embed-certs-20220531175604-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 18:09:24.304647  269289 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220531175604-6903"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:09:24.304770  269289 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220531175604-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:09:24.304828  269289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 18:09:24.311429  269289 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:09:24.311492  269289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:09:24.317597  269289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (576 bytes)
	I0531 18:09:24.329144  269289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:09:24.340465  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0531 18:09:24.352005  269289 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:09:24.354594  269289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:09:24.362803  269289 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903 for IP: 192.168.49.2
	I0531 18:09:24.362883  269289 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
	I0531 18:09:24.362917  269289 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
	I0531 18:09:24.362978  269289 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/client.key
	I0531 18:09:24.363028  269289 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key.dd3b5fb2
	I0531 18:09:24.363065  269289 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key
	I0531 18:09:24.363186  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
	W0531 18:09:24.363227  269289 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
	I0531 18:09:24.363235  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:09:24.363261  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:09:24.363280  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:09:24.363304  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
	I0531 18:09:24.363343  269289 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
	I0531 18:09:24.363895  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:09:24.379919  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:09:24.395597  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:09:24.411294  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/embed-certs-20220531175604-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:09:24.426758  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:09:24.442477  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:09:24.458117  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:09:24.473675  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 18:09:24.489258  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:09:24.504896  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
	I0531 18:09:24.520499  269289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
	I0531 18:09:24.536081  269289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
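
	The scp lines above push the freshly generated certificates, keys, and kubeconfig to fixed destinations under /var/lib/minikube and /usr/share/ca-certificates on the node. A minimal Go sketch of the same push; minikube streams the bytes over its own managed SSH session rather than invoking scp, so the host, user, and identity-file path below are illustrative placeholders only:

package main

import (
	"fmt"
	"os/exec"
)

// pushFile copies a local file to a destination path on the node.
// The SSH endpoint and key path are hypothetical placeholders, not
// minikube's actual transfer mechanism.
func pushFile(src, dst string) error {
	cmd := exec.Command("scp",
		"-i", "/home/user/.minikube/machines/minikube/id_rsa", // placeholder key
		src, "docker@192.168.49.2:"+dst)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s -> %s: %v: %s", src, dst, err, out)
	}
	return nil
}

func main() {
	if err := pushFile("apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"); err != nil {
		fmt.Println(err)
	}
}
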
	I0531 18:09:24.547693  269289 ssh_runner.go:195] Run: openssl version
	I0531 18:09:24.552235  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:09:24.559008  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561937  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.561976  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:09:24.566453  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:09:24.572576  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
	I0531 18:09:24.579358  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582018  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.582059  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
	I0531 18:09:24.586386  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
	I0531 18:09:24.592600  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
	I0531 18:09:24.599203  269289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.601990  269289 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.602019  269289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
	I0531 18:09:24.606281  269289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
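
	Each certificate installed above goes through the same three-step dance: hash it with `openssl x509 -hash -noout`, then symlink it into /etc/ssl/certs under `<hash>.0`, which is the hashed-directory naming convention OpenSSL uses to look up trust anchors. A short Go sketch of that convention (run as root on the node; error handling trimmed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the ln -fs commands in the log.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // emulate ln -fs: replace any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
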
	I0531 18:09:24.612350  269289 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531175604-6903 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531175604-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 18:09:24.612448  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0531 18:09:24.612477  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:24.635020  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:24.635044  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:24.635056  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:24.635066  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:24.635074  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:24.635089  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:24.635098  269289 cri.go:87] found id: ""
	I0531 18:09:24.635135  269289 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0531 18:09:24.646413  269289 cri.go:114] JSON = null
	W0531 18:09:24.646451  269289 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
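
	The warning above comes from comparing two views of the same containerd state: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` returned six container IDs, while `runc --root /run/containerd/runc/k8s.io list -f json` returned JSON `null`, i.e. zero paused containers. A compact Go sketch of that cross-check, using the exact commands from the log (errors elided for brevity):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// IDs of all kube-system containers, one per line.
	out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	ids := strings.Fields(string(out))

	// runc's view of the same runtime root; "null" decodes to a nil slice.
	raw, _ := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	var listed []map[string]interface{}
	json.Unmarshal(raw, &listed)

	if len(listed) != len(ids) {
		fmt.Printf("list returned %d containers, but ps returned %d\n", len(listed), len(ids))
	}
}
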
	I0531 18:09:24.646485  269289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:09:24.652823  269289 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 18:09:24.652845  269289 kubeadm.go:626] restartCluster start
	I0531 18:09:24.652871  269289 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 18:09:24.658784  269289 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.659393  269289 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531175604-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:09:24.659677  269289 kubeconfig.go:127] "embed-certs-20220531175604-6903" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig - will repair!
	I0531 18:09:24.660165  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
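
	The verify-and-repair step above is a kubeconfig context lookup: if the profile's context name is absent from the shared kubeconfig, the file is rewritten under a write lock. With client-go's clientcmd package the check itself is a map lookup; a sketch with a placeholder kubeconfig path (the locking and rewrite are minikube-specific):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/user/.kube/config" // placeholder kubeconfig path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "embed-certs-20220531175604-6903"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("%q context is missing from %s - will repair!\n", name, path)
		// minikube re-adds the cluster, authinfo, and context entries here,
		// then writes the file back (clientcmd.WriteToFile) under a lock.
	}
}
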
	I0531 18:09:24.661391  269289 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 18:09:24.667472  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.667510  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.674690  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:24.875005  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:24.875078  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:24.883754  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.075164  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.075227  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.083571  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.275792  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.275862  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.284084  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.475412  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.475492  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.483769  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.675018  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.675093  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.683421  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:25.875700  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:25.875758  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:25.884127  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.075367  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.075441  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.083636  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.274875  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.274936  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.283259  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.475530  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.475600  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.483831  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.675186  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.675264  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.683305  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:26.875527  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:26.875591  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:26.883712  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.074838  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.074911  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.082943  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.275201  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.275279  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.283352  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.475732  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.475810  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.484027  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.675351  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.675423  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.683562  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.683579  269289 api_server.go:165] Checking apiserver status ...
	I0531 18:09:27.683610  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 18:09:27.690956  269289 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.690974  269289 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
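
	The "Checking apiserver status" rounds above run `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms and give up after a fixed number of attempts, at which point the run is classified as needing reconfiguration. The polling skeleton looks like this; the interval and deadline are chosen to match the log's cadence, not taken from minikube's source:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until the kube-apiserver process
// shows up or the deadline passes. pgrep exits non-zero on no match.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return "", errors.New("timed out waiting for the condition")
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	if err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
		return
	}
	fmt.Print("apiserver pid: ", pid)
}
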
	I0531 18:09:27.690980  269289 kubeadm.go:1092] stopping kube-system containers ...
	I0531 18:09:27.690988  269289 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0531 18:09:27.691025  269289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:09:27.714735  269289 cri.go:87] found id: "1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028"
	I0531 18:09:27.714760  269289 cri.go:87] found id: "2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843"
	I0531 18:09:27.714770  269289 cri.go:87] found id: "bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808"
	I0531 18:09:27.714779  269289 cri.go:87] found id: "93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b"
	I0531 18:09:27.714788  269289 cri.go:87] found id: "8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e"
	I0531 18:09:27.714799  269289 cri.go:87] found id: "55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad"
	I0531 18:09:27.714808  269289 cri.go:87] found id: ""
	I0531 18:09:27.714813  269289 cri.go:232] Stopping containers: [1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad]
	I0531 18:09:27.714851  269289 ssh_runner.go:195] Run: which crictl
	I0531 18:09:27.717532  269289 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1c61a8e4e6919ded5a883b59ca0a43d23aa3b5ba1c7d140c84d4976701ca9028 2dd4c6e62c84850e1f584f33592a98375b49580bb6176e245f22d2d5f62d1843 bce895f043845f92717aed3d193f83637787d1e31f3ed85239aaf05aeac64808 93653e4eba8add50bc672103a0baf79faacc3ec78c75a138be82a5df4af2a32b 8878d3b54661fe34732e1514eb2fd5a4686a1e68fff6d3d75d42cdfa05313e5e 55beac89e187612650910b72071b003f22f8aaf423d32ede60de405981c3a7ad
	I0531 18:09:27.740922  269289 ssh_runner.go:195] Run: sudo systemctl stop kubelet
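
	Before reconfiguring, the node is quiesced: crictl is located with `which`, every kube-system container found earlier is stopped, and then the kubelet is stopped so nothing respawns them. As a sketch, with short placeholder IDs standing in for the full 64-character container IDs above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ids := []string{"container-id-1", "container-id-2"} // placeholders
	args := append([]string{"/usr/bin/crictl", "stop"}, ids...)
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		fmt.Printf("crictl stop: %v: %s\n", err, out)
	}
	if out, err := exec.Command("sudo", "systemctl", "stop", "kubelet").CombinedOutput(); err != nil {
		fmt.Printf("systemctl stop kubelet: %v: %s\n", err, out)
	}
}
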
	I0531 18:09:27.750082  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:09:27.756789  269289 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 17:56 /etc/kubernetes/scheduler.conf
	
	I0531 18:09:27.756825  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:09:27.763097  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:09:27.769330  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:09:23.375648  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:25.375756  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.875533  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.704608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:29.705088  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:27.775774  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.775824  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:09:27.782146  269289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:09:27.788206  269289 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:09:27.788249  269289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:09:27.794183  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800354  269289 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 18:09:27.800371  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:27.840426  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.483103  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.611279  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:28.659562  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
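
	Rather than a full `kubeadm init`, the restart path replays only the phases it needs (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. The same sequence as a loop; the log wraps each call in `sudo env PATH=...`, which this sketch skips by invoking the versioned binary directly:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.23.6/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", p, err)
			return
		}
	}
}
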
	I0531 18:09:28.717430  269289 api_server.go:51] waiting for apiserver process to appear ...
	I0531 18:09:28.717495  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.225924  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:29.725826  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.225776  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.725310  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.225503  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:31.725612  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.225964  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:32.726000  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:30.375823  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.875541  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:32.205237  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:34.205608  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:36.704946  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:33.225375  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:33.726285  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.225275  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.725662  269289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:09:34.736940  269289 api_server.go:71] duration metric: took 6.019510906s to wait for apiserver process to appear ...
	I0531 18:09:34.736971  269289 api_server.go:87] waiting for apiserver healthz status ...
	I0531 18:09:34.736983  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:34.737332  269289 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0531 18:09:35.238095  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:35.375635  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:37.378182  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:38.110402  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 18:09:38.110472  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 18:09:38.237764  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.306327  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.306358  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:38.737926  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:38.744239  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:38.744265  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.237517  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.242116  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 18:09:39.242143  269289 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 18:09:39.738349  269289 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:09:39.743589  269289 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:09:39.749147  269289 api_server.go:140] control plane version: v1.23.6
	I0531 18:09:39.749166  269289 api_server.go:130] duration metric: took 5.012189517s to wait for apiserver health ...
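
	The healthz loop above tolerates an early 403 (anonymous access is denied until the RBAC bootstrap roles land) and a run of 500s whose bodies list which post-start hooks are still failing, and only declares the control plane healthy on a 200 "ok". A sketch of that loop; TLS verification is skipped here for brevity, whereas minikube trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch: skip verification instead of
		// loading /var/lib/minikube/certs/ca.crt into a cert pool.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while apiserver boots
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
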
	I0531 18:09:39.749173  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:09:39.749179  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:09:39.751213  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:09:38.705223  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:41.205391  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.752701  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:09:39.818871  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:09:39.818891  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:09:39.834366  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
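
	Once healthy, the CNI manifest is written to /var/tmp/minikube/cni.yaml and applied with the version-matched kubectl binary against the node-local kubeconfig, exactly as in the Run line above. As a sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
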
	I0531 18:09:40.442544  269289 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:09:40.449925  269289 system_pods.go:59] 9 kube-system pods found
	I0531 18:09:40.449965  269289 system_pods.go:61] "coredns-64897985d-w2s2k" [272d1735-077b-4af0-a2fa-5b0f85c8e4fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.449977  269289 system_pods.go:61] "etcd-embed-certs-20220531175604-6903" [73942f36-bfae-49e5-87fb-43820d0182cc] Running
	I0531 18:09:40.449988  269289 system_pods.go:61] "kindnet-jrlsl" [c6f8d506-b373-4e9b-8fd9-1bcfd3f0172c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0531 18:09:40.449999  269289 system_pods.go:61] "kube-apiserver-embed-certs-20220531175604-6903" [ecfb98e0-1317-4251-b80f-93138d93521a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 18:09:40.450014  269289 system_pods.go:61] "kube-controller-manager-embed-certs-20220531175604-6903" [f655b743-bb70-49d9-ab57-6575a05ae6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 18:09:40.450024  269289 system_pods.go:61] "kube-proxy-nvktf" [a3c917bd-93a0-40b6-85c5-7ea637a1aaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 18:09:40.450039  269289 system_pods.go:61] "kube-scheduler-embed-certs-20220531175604-6903" [7ff61926-9d06-4412-b29e-a3b958ae7fdb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 18:09:40.450056  269289 system_pods.go:61] "metrics-server-b955d9d8-5hcnq" [bc956dda-3db9-478e-8fe9-4f4375857a12] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450067  269289 system_pods.go:61] "storage-provisioner" [d2e35bf4-3bfa-46e5-91c7-70a4f1ab348a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0531 18:09:40.450075  269289 system_pods.go:74] duration metric: took 7.509225ms to wait for pod list to return data ...
	I0531 18:09:40.450087  269289 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:09:40.452551  269289 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0531 18:09:40.452581  269289 node_conditions.go:123] node cpu capacity is 8
	I0531 18:09:40.452591  269289 node_conditions.go:105] duration metric: took 2.4994ms to run NodePressure ...
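
	The pod census and NodePressure checks above are plain Kubernetes API reads: list the kube-system pods, then read each node's capacity. The same reads with client-go, against a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu %s, ephemeral-storage %s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
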
	I0531 18:09:40.452605  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 18:09:40.569376  269289 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573483  269289 kubeadm.go:777] kubelet initialised
	I0531 18:09:40.573510  269289 kubeadm.go:778] duration metric: took 4.104573ms waiting for restarted kubelet to initialise ...
	I0531 18:09:40.573518  269289 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:09:40.580067  269289 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
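
	Every pod_ready line in the remainder of this log is one iteration of the same predicate: fetch the pod and look for a PodReady condition with status True; a Pending pod that is still Unschedulable (the node's not-ready taint) never acquires one. The predicate with client-go (placeholder kubeconfig path):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has a Ready condition set to True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-64897985d-w2s2k", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
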
	I0531 18:09:42.585321  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:39.876171  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:42.376043  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:43.705074  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:45.705136  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.586120  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:47.085707  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:44.875548  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:46.876233  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:48.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:50.206057  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.585053  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.585603  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:49.375708  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:51.875940  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:52.705353  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.705507  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:54.084883  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.085413  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:53.876111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:56.375796  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:57.205002  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:59.704756  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.085579  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.584952  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.585345  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:09:58.875791  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:00.876220  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:02.205444  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.205856  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:06.704493  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:04.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:07.085004  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:03.375587  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:05.876410  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.705331  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:11.205277  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:09.585758  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.085534  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:08.375300  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:10.375906  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:12.875744  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:13.205338  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:15.704598  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.085723  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:16.085916  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:14.876170  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.376062  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:17.705091  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.205306  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:18.584998  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:20.585431  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:19.875325  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:21.875824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:22.704807  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.205734  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.085208  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:25.585032  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.585926  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:23.876246  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:26.375824  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:27.704471  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.205614  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.085645  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:28.375853  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:30.875828  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.876426  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:32.205748  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.705114  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:34.586123  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.084913  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:35.376094  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.376934  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:37.204855  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.205282  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.704795  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.585507  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:42.084955  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:39.875448  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:41.875776  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:43.704835  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:45.705043  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.085118  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.085542  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:44.375440  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:46.876272  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.205603  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:50.704404  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:48.586090  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.085610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:49.376052  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:51.876540  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:52.705054  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:55.205228  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:53.585839  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.084986  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:54.376090  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:56.875598  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:57.704861  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.205848  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.085315  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.085770  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.585759  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:10:58.876074  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:00.876394  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:02.704985  261225 pod_ready.go:102] pod "coredns-64897985d-8cptk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:54:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.702287  261225 pod_ready.go:81] duration metric: took 4m0.002387032s waiting for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" ...
	E0531 18:11:03.702314  261225 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-8cptk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:11:03.702336  261225 pod_ready.go:38] duration metric: took 4m0.006822044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:11:03.702359  261225 kubeadm.go:630] restartCluster took 4m15.094217425s
	W0531 18:11:03.702487  261225 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
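The pod_ready.go:102 lines above are minikube polling the CoreDNS pod's status roughly every 2s; the pod never leaves Pending because the node still carries the node.kubernetes.io/not-ready taint, which the kubelet clears only once a CNI (Calico in this run) is installed and reporting ready. A minimal sketch of such a readiness poll, assuming client-go — the 4m deadline and 2s interval mirror the log, but the code is illustrative, not minikube's actual pod_ready.go:

	// Minimal pod-Ready poll, assuming client-go; illustrative only.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 4m0s matches the WaitExtra timeout reported in the log.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-64897985d-8cptk", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // the log shows ~2s between polls
		}
		fmt.Println("timed out waiting for pod to be Ready (will not retry)")
	}

A pod stuck Pending with PodScheduled=False never acquires a Ready condition at all, which is why the loop above (like the real check) runs the full 4m0s before giving up.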
	I0531 18:11:03.702514  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:11:05.343353  261225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.640813037s)
	I0531 18:11:05.343437  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:05.352859  261225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:11:05.359859  261225 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:11:05.359907  261225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:11:05.366334  261225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:11:05.366377  261225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
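Because `kubeadm reset --force` wipes /etc/kubernetes, the subsequent `ls` config check exits with status 2 and minikube proceeds straight to a fresh `kubeadm init` rather than a stale-config cleanup. The reset does not address the underlying cause, though; that can be confirmed by inspecting the node's taints and Ready condition. A minimal diagnostic sketch under the same client-go assumption (illustrative, not part of minikube):

	// Minimal node taint/condition diagnostic, assuming client-go; illustrative only.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// node.kubernetes.io/not-ready stays in this list until the CNI is up.
			fmt.Printf("node %s taints: %v\n", n.Name, n.Spec.Taints)
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("  Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
				}
			}
		}
	}

On a node like this one, the Ready condition's reason (typically something like a network-plugin-not-ready message from the kubelet) would point back at the Calico rollout rather than at CoreDNS itself.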
	I0531 18:11:04.587602  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.085341  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:03.375499  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:05.375669  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:07.375797  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
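The interleaved pod_ready entries come from two other minikube start processes running in parallel (PIDs 269289 and 265084), and both report the same condition: coredns stays Pending with PodScheduled=False because each single-node cluster still carries the node.kubernetes.io/not-ready taint, which the pod does not tolerate. A quick way to confirm the taint on a cluster in this state (NODE is a placeholder for whichever profile's node is being inspected, not a name from the log):

	# print the node's taints; while the CNI is still initializing this typically
	# shows the node.kubernetes.io/not-ready key from the scheduler message above
	kubectl get node "$NODE" -o jsonpath='{.spec.taints}{"\n"}'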
	I0531 18:11:09.585588  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:12.086457  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:09.376111  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:11.875588  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:14.586248  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:17.085311  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:13.875622  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:16.375856  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.709173  261225 out.go:204]   - Generating certificates and keys ...
	I0531 18:11:18.711907  261225 out.go:204]   - Booting up control plane ...
	I0531 18:11:18.714606  261225 out.go:204]   - Configuring RBAC rules ...
	I0531 18:11:18.716465  261225 cni.go:95] Creating CNI manager for ""
	I0531 18:11:18.716486  261225 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:11:18.717949  261225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:11:18.719198  261225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:11:18.722600  261225 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:11:18.722616  261225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:11:18.735057  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
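The CNI step above is self-contained: cni.go selects kindnet for the docker driver + containerd runtime, verifies the bundled CNI plugins exist (the stat on /opt/cni/bin/portmap), copies the generated manifest to /var/tmp/minikube/cni.yaml, and applies it. Consolidated into a single command with the same paths the log shows:

	# apply the generated kindnet manifest with the cluster-local kubectl
	sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply \
	     --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml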
	I0531 18:11:19.350353  261225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:11:19.350404  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.350427  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=no-preload-20220531175323-6903 minikube.k8s.io/updated_at=2022_05_31T18_11_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.430085  261225 ops.go:34] apiserver oom_adj: -16
	I0531 18:11:19.430086  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.983488  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.483888  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:20.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:21.483016  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:19.087921  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.585530  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:18.876428  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.375897  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:21.983544  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.482945  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:22.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.483551  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.983592  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.483659  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:24.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.483167  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:25.982981  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:26.483682  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:23.585627  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.585786  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.585848  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:23.375949  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:25.375976  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:27.875813  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:26.983223  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.483242  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:27.983137  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.483188  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:28.983741  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.483879  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:29.983081  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.483570  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:30.982889  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.483729  261225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:11:31.537008  261225 kubeadm.go:1045] duration metric: took 12.186656817s to wait for elevateKubeSystemPrivileges.
	I0531 18:11:31.537033  261225 kubeadm.go:397] StartCluster complete in 4m42.970207425s
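The run of identical `kubectl get sa default` calls above (one roughly every 500ms from 18:11:19 to 18:11:31) is elevateKubeSystemPrivileges polling until the token controller has created the default service account; the minikube-rbac binding issued at 18:11:19.350 can only land once it exists, hence the 12.19s duration metric. A minimal sketch of that wait-then-bind sequence (paths from the log; the loop shape is an assumption):

	KUBECTL=/var/lib/minikube/binaries/v1.23.6/kubectl
	CFG=/var/lib/minikube/kubeconfig
	# poll until the default service account has been created
	until sudo "$KUBECTL" get sa default --kubeconfig="$CFG" >/dev/null 2>&1; do
	    sleep 0.5
	done
	# then grant kube-system:default cluster-admin, as the log shows
	sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
	     --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	     --kubeconfig="$CFG"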
	I0531 18:11:31.537049  261225 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:31.537139  261225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:11:31.538101  261225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:11:32.051475  261225 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220531175323-6903" rescaled to 1
	I0531 18:11:32.051538  261225 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:11:32.053328  261225 out.go:177] * Verifying Kubernetes components...
	I0531 18:11:32.051603  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:11:32.051631  261225 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:11:32.051839  261225 config.go:178] Loaded profile config "no-preload-20220531175323-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:11:32.054610  261225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:11:32.054630  261225 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054636  261225 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054661  261225 addons.go:65] Setting dashboard=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054667  261225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220531175323-6903"
	I0531 18:11:32.054666  261225 addons.go:65] Setting metrics-server=true in profile "no-preload-20220531175323-6903"
	I0531 18:11:32.054677  261225 addons.go:153] Setting addon dashboard=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054685  261225 addons.go:165] addon dashboard should already be in state true
	I0531 18:11:32.054694  261225 addons.go:153] Setting addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:32.054650  261225 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.054708  261225 addons.go:165] addon metrics-server should already be in state true
	W0531 18:11:32.054715  261225 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:11:32.054731  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054749  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.054752  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.055002  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055252  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055266  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.055256  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.065260  261225 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:11:32.103052  261225 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:11:32.107208  261225 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108480  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:11:32.108501  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:11:32.109942  261225 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:11:32.108562  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.111742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:11:32.111790  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:11:32.111836  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.114237  261225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:11:30.085797  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.086472  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:30.376185  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.875986  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:32.115989  261225 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.116015  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:11:32.116006  261225 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220531175323-6903"
	W0531 18:11:32.116032  261225 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:11:32.116057  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.116079  261225 host.go:66] Checking if "no-preload-20220531175323-6903" exists ...
	I0531 18:11:32.116746  261225 cli_runner.go:164] Run: docker container inspect no-preload-20220531175323-6903 --format={{.State.Status}}
	I0531 18:11:32.154692  261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:11:32.172159  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176336  261225 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.176359  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:11:32.176359  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.176412  261225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220531175323-6903
	I0531 18:11:32.178381  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.214197  261225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/no-preload-20220531175323-6903/id_rsa Username:docker}
	I0531 18:11:32.405898  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:11:32.405927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:11:32.418989  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:11:32.420809  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:11:32.421818  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:11:32.421841  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:11:32.427921  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:11:32.427945  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:11:32.502461  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:11:32.502491  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:11:32.513965  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:11:32.513988  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:11:32.519888  261225 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.519911  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:11:32.534254  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:11:32.534276  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:11:32.613065  261225 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
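The host-record step rewrites the coredns ConfigMap in place: the pipeline started at 18:11:32.154 pipes the Corefile through sed, which inserts a hosts stanza just before the `forward . /etc/resolv.conf` line, and the result goes back through `kubectl replace -f -`. Decoded from the sed expression, the injected Corefile fragment is:

	hosts {
	   192.168.67.1 host.minikube.internal
	   fallthrough
	}

This is what lets pods resolve host.minikube.internal to the docker network gateway (192.168.67.1 in this run).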
	I0531 18:11:32.613879  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:11:32.624901  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:11:32.624927  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:11:32.718626  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:11:32.718699  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:11:32.811670  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:11:32.811701  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:11:32.908742  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:11:32.908771  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:11:33.007964  261225 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.007996  261225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:11:33.107854  261225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:11:33.513893  261225 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220531175323-6903"
	I0531 18:11:34.072421  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.312128  261225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.204221815s)
	I0531 18:11:34.314039  261225 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0531 18:11:34.315350  261225 addons.go:417] enableAddons completed in 2.263731051s
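Every addon above follows the same scp-then-apply pattern: manifests are written under /etc/kubernetes/addons/ and each addon's group is applied with a single kubectl apply carrying multiple -f flags (the ten dashboard manifests took 1.2s). Spot checks after enableAddons completes might look like this (namespace and object names are assumptions based on the manifest names, not confirmed by this log):

	kubectl -n kubernetes-dashboard get pods          # dashboard workload
	kubectl -n kube-system get deploy metrics-server  # metrics-server deployment
	kubectl -n kube-system get pod storage-provisioner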
	I0531 18:11:36.571090  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:34.585378  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.085134  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:35.375998  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:37.876022  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:38.571611  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:40.571791  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
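From 18:11:34 onward, node_ready polls the freshly bootstrapped no-preload node, which stays NotReady while the kindnet pods come up: the kubelet keeps the Ready condition False until a CNI config is in place, the same mechanism holding coredns Pending in the two parallel profiles. The condition can be watched directly (a sketch; node name taken from the log):

	# print the Ready condition's status and reason for the node
	kubectl get node no-preload-20220531175323-6903 \
	    -o jsonpath='{range .status.conditions[?(@.type=="Ready")]}{.status}{" "}{.reason}{"\n"}{end}'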
	I0531 18:11:39.585652  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.085745  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:40.375543  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.375912  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:42.571860  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:45.071529  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:44.585489  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.084675  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:44.875655  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:46.876121  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:47.072141  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.570942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:51.571718  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:49.085124  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.585600  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:48.876234  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:51.375464  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:54.071361  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:56.072083  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:53.585630  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.085163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:53.876096  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:56.375046  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.571942  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:01.071618  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:11:58.585559  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:01.085890  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:11:58.376093  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:00.876187  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:02.876231  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:03.072143  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:05.570848  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:03.086038  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.585743  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:05.375789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.875852  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:07.570951  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:09.571526  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:11.571905  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:08.085268  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.585201  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:10.375245  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:12.376080  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.071606  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:16.072126  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:13.085556  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:15.085642  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.585172  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:14.875789  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:17.375091  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:18.571244  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:20.572070  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:19.585882  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:22.085420  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:19.375353  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:21.376109  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.071952  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:25.571538  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:24.586045  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.586231  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:23.876833  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:26.375751  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:27.571821  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.571882  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:29.085641  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:31.085685  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:28.875672  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:30.875820  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.876354  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:32.071187  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:34.071420  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:36.071564  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:33.585013  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.585739  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:35.375221  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:37.376282  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:38.570968  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:40.571348  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:38.085427  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.585163  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:42.585706  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:39.875351  265084 pod_ready.go:102] pod "coredns-64897985d-92zgx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:55:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:40.873471  265084 pod_ready.go:81] duration metric: took 4m0.002908121s waiting for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" ...
	E0531 18:12:40.873493  265084 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-92zgx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:12:40.873522  265084 pod_ready.go:38] duration metric: took 4m0.007756787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:12:40.873544  265084 kubeadm.go:630] restartCluster took 4m15.709568906s
	W0531 18:12:40.873671  265084 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:12:40.873698  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:12:42.413886  265084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.540162536s)
	I0531 18:12:42.413945  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:12:42.423107  265084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:12:42.429872  265084 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:12:42.429912  265084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:12:42.436297  265084 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:12:42.436331  265084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
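At this point the 4m0s readiness wait has expired, so minikube stops trying to restart the existing cluster and rebuilds it: kubeadm reset tears down the control plane state, the subsequent config check fails as expected because reset removed the /etc/kubernetes/*.conf files, and kubeadm init re-bootstraps from the saved kubeadm.yaml with the known-noisy preflight checks ignored. Condensed to its essentials, the recovery sequence run on the node is the sketch below (paths and flags are copied from the log lines above; the preflight list is abbreviated to a subset):

	sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,Mem,SystemVerification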
	I0531 18:12:42.571552  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.072252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:45.084735  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.085284  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:47.570931  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.571608  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:51.571648  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:49.584898  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:51.586112  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.656930  265084 out.go:204]   - Generating certificates and keys ...
	I0531 18:12:55.659590  265084 out.go:204]   - Booting up control plane ...
	I0531 18:12:55.662052  265084 out.go:204]   - Configuring RBAC rules ...
	I0531 18:12:55.664008  265084 cni.go:95] Creating CNI manager for ""
	I0531 18:12:55.664023  265084 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:12:55.665448  265084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:12:54.071703  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:56.071909  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:54.085829  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:56.584911  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:55.666615  265084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:12:55.670087  265084 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:12:55.670101  265084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:12:55.683282  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
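With the docker driver plus containerd runtime, minikube recommends kindnet as the CNI (cni.go:162 above), writes the manifest to /var/tmp/minikube/cni.yaml over SSH, and applies it with the version-pinned kubectl binary. Once the kindnet DaemonSet pod is running, the kubelet drops the not-ready taint and CoreDNS can finally schedule. A spot check from the host might look like the following, assuming kindnet's usual pod label (the label selector is an assumption, not printed in this log):

	kubectl -n kube-system get pods -l app=kindnet -o wide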
	I0531 18:12:56.287125  265084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:12:56.287250  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.287269  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531175509-6903 minikube.k8s.io/updated_at=2022_05_31T18_12_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.294761  265084 ops.go:34] apiserver oom_adj: -16
	I0531 18:12:56.356555  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:56.931369  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.430985  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:57.930763  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.072370  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:00.571645  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:12:58.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:00.585876  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:12:58.431243  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:58.930845  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.431397  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:12:59.931568  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.431233  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:00.930831  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.430783  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:01.931582  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.431559  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:02.931164  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.072253  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:05.571252  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:03.085192  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:05.585726  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:03.431622  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:03.931432  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.431651  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:04.931602  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.431254  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:05.931669  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.431587  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:06.930870  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.431379  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:07.931781  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.431738  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.931001  265084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:08.988549  265084 kubeadm.go:1045] duration metric: took 12.701350416s to wait for elevateKubeSystemPrivileges.
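The burst of repeated `kubectl get sa default` calls above is the elevateKubeSystemPrivileges step whose duration is reported here: minikube polls until the controller manager has created the default ServiceAccount, which proves the freshly-initialized API server is accepting writes, having already bound cluster-admin to kube-system:default at 18:12:56.287269. The poll is equivalent to a loop like this sketch (binary and kubeconfig paths copied from the log; the sleep interval is illustrative):

	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done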
	I0531 18:13:08.988585  265084 kubeadm.go:397] StartCluster complete in 4m43.864893986s
	I0531 18:13:08.988604  265084 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:08.988717  265084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:13:08.989847  265084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:13:09.508027  265084 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531175509-6903" rescaled to 1
	I0531 18:13:09.508095  265084 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:13:09.510016  265084 out.go:177] * Verifying Kubernetes components...
	I0531 18:13:09.508139  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:13:09.508164  265084 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:13:09.508352  265084 config.go:178] Loaded profile config "default-k8s-different-port-20220531175509-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:13:09.511398  265084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:09.511420  265084 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511435  265084 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511445  265084 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511451  265084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511458  265084 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:13:09.511464  265084 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511494  265084 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511509  265084 addons.go:165] addon metrics-server should already be in state true
	I0531 18:13:09.511510  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511560  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511421  265084 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:09.511623  265084 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.511642  265084 addons.go:165] addon dashboard should already be in state true
	I0531 18:13:09.511686  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.511794  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512032  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512055  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.512135  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.524059  265084 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:13:09.564034  265084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:13:09.565498  265084 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.565568  265084 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.566183  265084 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531175509-6903"
	W0531 18:13:09.566993  265084 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:13:09.566996  265084 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:13:09.568335  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:13:09.568357  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:13:09.567020  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:13:09.568408  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.568430  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.567022  265084 host.go:66] Checking if "default-k8s-different-port-20220531175509-6903" exists ...
	I0531 18:13:09.569999  265084 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:13:09.568977  265084 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531175509-6903 --format={{.State.Status}}
	I0531 18:13:09.571342  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:13:09.571364  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:13:09.571420  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.602943  265084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:13:09.610124  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.617756  265084 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.617783  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:13:09.617847  265084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531175509-6903
	I0531 18:13:09.619119  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.626594  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.663799  265084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/default-k8s-different-port-20220531175509-6903/id_rsa Username:docker}
	I0531 18:13:09.809728  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:13:09.809753  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:13:09.810223  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:13:09.810246  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:13:09.816960  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:13:09.817408  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:13:09.824732  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:13:09.824754  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:13:09.825129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:13:09.825148  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:13:09.838196  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:13:09.838214  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:13:09.906924  265084 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.906947  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:13:09.918653  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:13:09.918674  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:13:09.923798  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:13:09.931866  265084 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
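The "host record injected" line is the outcome of the sed pipeline started at 18:13:09.602943: it splices a hosts plugin block into the CoreDNS Corefile immediately before the forward directive and replaces the configmap, so that host.minikube.internal resolves to the docker network gateway from inside the cluster. Reconstructed from that command, the injected Corefile fragment is:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}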
	I0531 18:13:10.002603  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:13:10.002630  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:13:10.020129  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:13:10.020167  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:13:10.106375  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:13:10.106399  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:13:10.123765  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:13:10.123794  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:13:10.144076  265084 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.144119  265084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:13:10.215134  265084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:13:10.622559  265084 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531175509-6903"
	I0531 18:13:11.339286  265084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.12406901s)
	I0531 18:13:11.341817  265084 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
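Every addon above follows the same two-step pattern: the manifest bytes are copied node-side via `scp memory --> /etc/kubernetes/addons/<file>`, then the whole set is applied in a single kubectl apply -f batch with KUBECONFIG pinned to the in-node kubeconfig. To verify the dashboard objects actually landed, something like the command below should work, assuming the addon's usual target namespace (the namespace name is not printed in this log):

	kubectl -n kubernetes-dashboard get deploy,svc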
	I0531 18:13:07.571617  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:09.572391  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:08.085066  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:10.085481  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:12.585610  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:11.343086  265084 addons.go:417] enableAddons completed in 1.834929772s
	I0531 18:13:11.534064  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:12.071842  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:14.072366  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:16.571700  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:15.085503  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:17.585477  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:13.534515  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:15.534685  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.034548  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:18.571742  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:21.071863  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:20.085466  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:22.585412  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:20.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.033886  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:23.570885  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:25.571318  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:24.585439  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:27.085126  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:25.034013  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.534277  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:27.571734  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:30.071441  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:29.585497  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:32.084886  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:29.534310  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:31.534424  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:32.072003  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.571176  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:36.571707  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:34.085468  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:36.585785  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:33.534780  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:35.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.033558  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:38.571878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:41.071580  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:39.085376  269289 pod_ready.go:102] pod "coredns-64897985d-w2s2k" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 17:56:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0531 18:13:40.582100  269289 pod_ready.go:81] duration metric: took 4m0.001996579s waiting for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" ...
	E0531 18:13:40.582169  269289 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-w2s2k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 18:13:40.582215  269289 pod_ready.go:38] duration metric: took 4m0.00868413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:13:40.582275  269289 kubeadm.go:630] restartCluster took 4m15.929419982s
	W0531 18:13:40.582469  269289 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 18:13:40.582501  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0531 18:13:42.189588  269289 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.6070578s)
	I0531 18:13:42.189653  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:13:42.199068  269289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:13:42.205821  269289 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:13:42.205873  269289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:13:42.212208  269289 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:13:42.212266  269289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:13:40.033618  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:42.034150  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:43.072273  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:45.571260  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:44.534599  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:46.535205  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:48.071960  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:50.570994  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:49.034908  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:51.534484  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:56.088137  269289 out.go:204]   - Generating certificates and keys ...
	I0531 18:13:56.090627  269289 out.go:204]   - Booting up control plane ...
	I0531 18:13:56.093129  269289 out.go:204]   - Configuring RBAC rules ...
	I0531 18:13:56.094774  269289 cni.go:95] Creating CNI manager for ""
	I0531 18:13:56.094792  269289 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 18:13:56.096311  269289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:13:52.571414  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:55.071807  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:56.097594  269289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:13:56.101201  269289 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 18:13:56.101218  269289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0531 18:13:56.113794  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:13:56.748029  269289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:13:56.748093  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.748112  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=embed-certs-20220531175604-6903 minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.822037  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:56.825886  269289 ops.go:34] apiserver oom_adj: -16
	I0531 18:13:57.376176  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:53.534591  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:55.536859  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:58.033966  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:13:57.071988  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:59.571338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:01.573338  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:13:57.876331  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.376391  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:58.876462  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.376020  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:13:59.876318  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.375732  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.876322  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.376258  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:01.875649  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:02.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:00.534980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:02.535335  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:04.071798  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:06.571345  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:02.875698  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.376128  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:03.876242  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.376344  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:04.875652  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.375884  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.876374  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.375802  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:06.876594  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:07.376348  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:05.033642  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.534735  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:07.876085  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.376334  269289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:14:08.431068  269289 kubeadm.go:1045] duration metric: took 11.683021831s to wait for elevateKubeSystemPrivileges.
	I0531 18:14:08.431097  269289 kubeadm.go:397] StartCluster complete in 4m43.818756101s
	I0531 18:14:08.431119  269289 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.431248  269289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 18:14:08.432696  269289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:14:08.947002  269289 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220531175604-6903" rescaled to 1
	I0531 18:14:08.947054  269289 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0531 18:14:08.948675  269289 out.go:177] * Verifying Kubernetes components...
	I0531 18:14:08.947153  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:14:08.947168  269289 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0531 18:14:08.947347  269289 config.go:178] Loaded profile config "embed-certs-20220531175604-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 18:14:08.950015  269289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:14:08.950062  269289 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950074  269289 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950082  269289 addons.go:65] Setting dashboard=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950092  269289 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950100  269289 addons.go:165] addon metrics-server should already be in state true
	I0531 18:14:08.950148  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950083  269289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220531175604-6903"
	I0531 18:14:08.950101  269289 addons.go:153] Setting addon dashboard=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950271  269289 addons.go:165] addon dashboard should already be in state true
	I0531 18:14:08.950319  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950061  269289 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220531175604-6903"
	I0531 18:14:08.950364  269289 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.950378  269289 addons.go:165] addon storage-provisioner should already be in state true
	I0531 18:14:08.950417  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:08.950518  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950641  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950757  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.950872  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:08.963248  269289 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:14:08.996303  269289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:14:08.997754  269289 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:08.997776  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:14:08.997822  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:08.999414  269289 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220531175604-6903"
	W0531 18:14:08.999437  269289 addons.go:165] addon default-storageclass should already be in state true
	I0531 18:14:08.999466  269289 host.go:66] Checking if "embed-certs-20220531175604-6903" exists ...
	I0531 18:14:09.001079  269289 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 18:14:08.999831  269289 cli_runner.go:164] Run: docker container inspect embed-certs-20220531175604-6903 --format={{.State.Status}}
	I0531 18:14:09.003755  269289 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 18:14:09.002479  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:14:09.005292  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:14:09.005383  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.007089  269289 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 18:14:09.008807  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 18:14:09.008840  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 18:14:09.008896  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.039305  269289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:14:09.047368  269289 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.047395  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:14:09.047454  269289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531175604-6903
	I0531 18:14:09.050164  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.052536  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.061683  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.090288  269289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/embed-certs-20220531175604-6903/id_rsa Username:docker}
	I0531 18:14:09.160469  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:14:09.212314  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:14:09.212343  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 18:14:09.216077  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:14:09.217010  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 18:14:09.217029  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 18:14:09.227272  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:14:09.227295  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:14:09.232066  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 18:14:09.232089  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 18:14:09.314812  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 18:14:09.314893  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 18:14:09.315135  269289 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.315179  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:14:09.329470  269289 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0531 18:14:09.406176  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 18:14:09.406200  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 18:14:09.408128  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:14:09.429793  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 18:14:09.429823  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 18:14:09.516510  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 18:14:09.516537  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 18:14:09.530570  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 18:14:09.530597  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 18:14:09.612604  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 18:14:09.612631  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 18:14:09.631042  269289 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:09.631070  269289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 18:14:09.711865  269289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 18:14:10.228589  269289 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220531175604-6903"
	I0531 18:14:10.618859  269289 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 18:14:09.072442  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:11.571596  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:10.620376  269289 addons.go:417] enableAddons completed in 1.673239404s
	I0531 18:14:10.977314  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:09.534892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:12.034154  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:13.571853  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:16.071183  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:13.477113  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:15.477267  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:17.477503  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:14.034360  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:16.534633  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:18.071690  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:20.570878  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:19.976762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:22.476749  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:18.534988  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:21.033579  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:23.072062  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:25.571452  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:24.976557  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:26.977352  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:23.534867  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:26.034090  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:27.571576  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:30.072241  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:29.477126  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:31.976050  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:28.534857  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:31.033495  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:33.034149  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:32.571726  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:35.071520  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:33.977624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:36.477062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:35.034245  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.534397  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:37.072086  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:39.072305  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:41.571404  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:38.977021  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:41.477149  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:40.033940  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:42.534713  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:44.072401  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:46.571051  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:43.977151  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:46.476598  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:45.033654  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:47.033892  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:48.571836  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:51.071985  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:48.477002  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:50.477180  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:49.034062  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:51.534464  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:53.571426  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:55.571659  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:52.976837  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:55.476624  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:57.477076  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:54.033904  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:56.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:14:58.071447  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:00.072133  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:14:59.976445  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:01.976750  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:14:58.534998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:01.034476  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:02.072236  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.571269  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:06.571629  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:04.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:06.476291  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:03.534865  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:06.033980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:08.571738  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:11.071854  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:08.476832  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:10.976476  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:08.533980  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:10.534643  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.033474  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:13.072258  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:15.072469  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:12.977062  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.476762  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:15.534881  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:18.033601  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:17.570753  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:19.571375  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:21.571479  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:17.976899  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:19.977288  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:22.476772  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:20.534891  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.033328  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:23.571892  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:26.071396  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:24.477019  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:26.976517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:25.033998  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:27.534916  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:28.072042  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:30.571432  261225 node_ready.go:58] node "no-preload-20220531175323-6903" has status "Ready":"False"
	I0531 18:15:32.073761  261225 node_ready.go:38] duration metric: took 4m0.008462542s waiting for node "no-preload-20220531175323-6903" to be "Ready" ...
	I0531 18:15:32.075979  261225 out.go:177] 
	W0531 18:15:32.077511  261225 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:15:32.077526  261225 out.go:239] * 
	W0531 18:15:32.078185  261225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:15:32.080080  261225 out.go:177] 
	I0531 18:15:29.477285  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:31.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:30.033697  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:32.033898  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:33.977634  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:36.476328  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:34.034272  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:36.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:38.476673  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:40.477412  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:38.534463  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:40.534774  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:42.976241  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:44.977315  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:47.476536  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:45.034265  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:47.534278  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:49.477384  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:51.976596  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:49.534496  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:51.534911  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:54.476365  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:56.477128  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:54.033999  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:56.534929  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:15:58.976541  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:00.976604  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:15:58.535059  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:01.033371  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:03.033446  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:02.976738  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:04.976824  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:07.476516  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:05.033660  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:07.034297  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:09.976551  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:11.977337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:09.534321  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:11.534699  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:14.476763  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:16.477351  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:14.033838  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:16.034318  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:18.976865  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:20.977366  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:18.533927  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:20.534762  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.034186  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:23.477097  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.976964  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:25.034285  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:27.534416  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:28.476490  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.477181  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:30.033979  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.534354  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:32.977105  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:35.477096  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:37.477182  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:34.534436  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:37.034012  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:39.976471  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:42.476550  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:39.534598  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:41.534728  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:44.976701  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:46.976746  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:44.033664  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:46.534914  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:49.476635  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:51.476946  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:48.535136  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:51.034336  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:53.976362  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:55.976980  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:16:53.534196  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:55.534525  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:57.535035  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:16:58.476831  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.477321  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:00.033962  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.534939  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:02.976221  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.477114  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:07.477398  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:05.033341  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:07.033678  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.034288  265084 node_ready.go:58] node "default-k8s-different-port-20220531175509-6903" has status "Ready":"False"
	I0531 18:17:09.536916  265084 node_ready.go:38] duration metric: took 4m0.012822769s waiting for node "default-k8s-different-port-20220531175509-6903" to be "Ready" ...
	I0531 18:17:09.538829  265084 out.go:177] 
	W0531 18:17:09.540332  265084 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:17:09.540349  265084 out.go:239] * 
	W0531 18:17:09.541063  265084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:17:09.542651  265084 out.go:177] 
	I0531 18:17:09.976861  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:12.476674  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:14.977142  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:17.477283  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:19.976577  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:21.978337  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:24.476234  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:26.476575  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:28.977103  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:31.476611  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:33.976344  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:35.977204  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:38.476416  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:40.977195  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:43.476141  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:45.476421  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:47.476462  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:49.476517  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:51.477331  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:53.977100  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:56.476989  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:17:58.477779  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:00.976553  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:03.477250  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:05.976740  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:08.476618  269289 node_ready.go:58] node "embed-certs-20220531175604-6903" has status "Ready":"False"
	I0531 18:18:08.978675  269289 node_ready.go:38] duration metric: took 4m0.015379225s waiting for node "embed-certs-20220531175604-6903" to be "Ready" ...
	I0531 18:18:08.980830  269289 out.go:177] 
	W0531 18:18:08.982370  269289 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0531 18:18:08.982392  269289 out.go:239] * 
	W0531 18:18:08.983213  269289 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:18:08.984834  269289 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8ae686ba129b3       6de166512aa22       54 seconds ago      Running             kindnet-cni               4                   e4c8266a862fc
	e77aa333c770d       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   e4c8266a862fc
	44bc935b7eaae       4c03754524064       13 minutes ago      Running             kube-proxy                0                   e83bbc46b3d7b
	68cad910900a4       595f327f224a4       13 minutes ago      Running             kube-scheduler            2                   292f35260c680
	86a97a48de4c0       25f8c7f3da61c       13 minutes ago      Running             etcd                      2                   7f60645170e76
	bc23fd1cfc64c       df7b72818ad2e       13 minutes ago      Running             kube-controller-manager   2                   42a69ebb96716
	3eb0415d100e5       8fa62c12256df       13 minutes ago      Running             kube-apiserver            2                   0066133f16a45
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-05-31 18:09:08 UTC, end at Tue 2022-05-31 18:27:12 UTC. --
	May 31 18:19:32 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:19:32.749628389Z" level=info msg="RemoveContainer for \"837b6342a4f49f8ff8e60dba74e0384b4a20a5901e5bfa5d86b5947d3e712c1a\" returns successfully"
	May 31 18:19:44 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:19:44.042804481Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	May 31 18:19:44 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:19:44.054445620Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"fbcb7226bcb5a6079d3af1e296626161285ae03ac51daadf973be3b659452fe5\""
	May 31 18:19:44 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:19:44.054874776Z" level=info msg="StartContainer for \"fbcb7226bcb5a6079d3af1e296626161285ae03ac51daadf973be3b659452fe5\""
	May 31 18:19:44 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:19:44.114869913Z" level=info msg="StartContainer for \"fbcb7226bcb5a6079d3af1e296626161285ae03ac51daadf973be3b659452fe5\" returns successfully"
	May 31 18:22:24 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:24.339432194Z" level=info msg="shim disconnected" id=fbcb7226bcb5a6079d3af1e296626161285ae03ac51daadf973be3b659452fe5
	May 31 18:22:24 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:24.339503480Z" level=warning msg="cleaning up after shim disconnected" id=fbcb7226bcb5a6079d3af1e296626161285ae03ac51daadf973be3b659452fe5 namespace=k8s.io
	May 31 18:22:24 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:24.339517398Z" level=info msg="cleaning up dead shim"
	May 31 18:22:24 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:24.348491777Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:22:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4116 runtime=io.containerd.runc.v2\n"
	May 31 18:22:25 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:25.038772824Z" level=info msg="RemoveContainer for \"fe70a30634ea97d62f66f1194dd9aa88573ebb2cf084f0d7ca32e561152178fa\""
	May 31 18:22:25 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:25.043456963Z" level=info msg="RemoveContainer for \"fe70a30634ea97d62f66f1194dd9aa88573ebb2cf084f0d7ca32e561152178fa\" returns successfully"
	May 31 18:22:47 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:47.042967275Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	May 31 18:22:47 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:47.055720383Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"e77aa333c770d0507b05e79792d283f6b10345dfb4868eafe79280a12690fb5f\""
	May 31 18:22:47 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:47.056143690Z" level=info msg="StartContainer for \"e77aa333c770d0507b05e79792d283f6b10345dfb4868eafe79280a12690fb5f\""
	May 31 18:22:47 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:22:47.205051171Z" level=info msg="StartContainer for \"e77aa333c770d0507b05e79792d283f6b10345dfb4868eafe79280a12690fb5f\" returns successfully"
	May 31 18:25:27 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:25:27.438330029Z" level=info msg="shim disconnected" id=e77aa333c770d0507b05e79792d283f6b10345dfb4868eafe79280a12690fb5f
	May 31 18:25:27 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:25:27.438386475Z" level=warning msg="cleaning up after shim disconnected" id=e77aa333c770d0507b05e79792d283f6b10345dfb4868eafe79280a12690fb5f namespace=k8s.io
	May 31 18:25:27 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:25:27.438397598Z" level=info msg="cleaning up dead shim"
	May 31 18:25:27 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:25:27.446988256Z" level=warning msg="cleanup warnings time=\"2022-05-31T18:25:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4217 runtime=io.containerd.runc.v2\n"
	May 31 18:25:28 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:25:28.343508337Z" level=info msg="RemoveContainer for \"fbcb7226bcb5a6079d3af1e296626161285ae03ac51daadf973be3b659452fe5\""
	May 31 18:25:28 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:25:28.348046037Z" level=info msg="RemoveContainer for \"fbcb7226bcb5a6079d3af1e296626161285ae03ac51daadf973be3b659452fe5\" returns successfully"
	May 31 18:26:18 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:26:18.042777810Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	May 31 18:26:18 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:26:18.054358776Z" level=info msg="CreateContainer within sandbox \"e4c8266a862fc482bc38910f35ef3c8cd3be9ccd386c92c693be2d76a125394f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"8ae686ba129b3c04ee8bdf51b0719630f3659c1e58aaaf8cd9b83ebafbfc0338\""
	May 31 18:26:18 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:26:18.054790666Z" level=info msg="StartContainer for \"8ae686ba129b3c04ee8bdf51b0719630f3659c1e58aaaf8cd9b83ebafbfc0338\""
	May 31 18:26:18 embed-certs-20220531175604-6903 containerd[380]: time="2022-05-31T18:26:18.205445590Z" level=info msg="StartContainer for \"8ae686ba129b3c04ee8bdf51b0719630f3659c1e58aaaf8cd9b83ebafbfc0338\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531175604-6903
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531175604-6903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531175604-6903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T18_13_56_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 18:13:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531175604-6903
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:27:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:24:23 +0000   Tue, 31 May 2022 18:13:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:24:23 +0000   Tue, 31 May 2022 18:13:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:24:23 +0000   Tue, 31 May 2022 18:13:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 31 May 2022 18:24:23 +0000   Tue, 31 May 2022 18:13:50 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    embed-certs-20220531175604-6903
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                9377e8f5-ae2b-465c-b601-bd790903b8eb
	  Boot ID:                    965b7680-1d1a-47a1-b524-025724bc52ff
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220531175604-6903                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-2cxvx                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-embed-certs-20220531175604-6903             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-20220531175604-6903    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ffdqp                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-20220531175604-6903             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 13m                kube-proxy  
	  Normal  NodeHasSufficientMemory  13m (x5 over 13m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x4 over 13m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x4 over 13m)  kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet     Node embed-certs-20220531175604-6903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.439933] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.023816] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.500003] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.519909] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.511923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.027917] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.415923] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +1.015850] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.508019] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +0.511879] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 4f e8 92 eb 6e 08 06
	[  +0.515931] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	[  +1.019918] IPv4: martian source 10.244.0.138 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca e4 25 6b e6 fd 08 06
	
	* 
	* ==> etcd [86a97a48de4c022f7d4dd27bedcede1b2552effe64b4c218f7ca4157ffaa5033] <==
	* {"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-05-31T18:13:49.904Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-05-31T18:13:50.631Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:embed-certs-20220531175604-6903 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-31T18:13:50.632Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-31T18:13:50.634Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-31T18:13:50.634Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-31T18:23:50.749Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":652}
	{"level":"info","ts":"2022-05-31T18:23:50.750Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":652,"took":"698.817µs"}
	
	* 
	* ==> kernel <==
	*  18:27:12 up  2:09,  0 users,  load average: 0.22, 0.26, 0.52
	Linux embed-certs-20220531175604-6903 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3eb0415d100e52bfe0c1104b9ddf8b526fb204e0137f70d1c939bf1abb69a44e] <==
	* I0531 18:17:11.104233       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:18:53.841803       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:18:53.841887       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:18:53.841905       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:19:53.842763       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:19:53.842815       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:19:53.842822       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:21:53.843000       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:21:53.843076       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:21:53.843093       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:23:53.849322       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:23:53.849392       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:23:53.849400       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:24:53.850143       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:24:53.850204       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:24:53.850211       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0531 18:26:53.850664       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 18:26:53.850733       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:26:53.850741       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [bc23fd1cfc64c42bba5c81e5279c39f1c49db486de602c2efbcd6b3eb2c19f97] <==
	* W0531 18:21:08.624713       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:21:38.213707       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:21:38.640985       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:22:08.222273       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:22:08.654733       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:22:38.232231       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:22:38.668567       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:23:08.242838       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:23:08.682484       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:23:38.253931       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:23:38.695882       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:24:08.264931       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:24:08.709653       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:24:38.277804       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:24:38.726396       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:25:08.301486       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:25:08.739831       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:25:38.322954       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:25:38.754140       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:26:08.339658       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:26:08.767821       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:26:38.356675       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:26:38.782603       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 18:27:08.372979       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 18:27:08.796800       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [44bc935b7eaae3b821bde04fc2059159d8351a1ce19b072cbedafb551488d14f] <==
	* I0531 18:14:10.520846       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 18:14:10.520915       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 18:14:10.520954       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 18:14:10.725611       1 server_others.go:206] "Using iptables Proxier"
	I0531 18:14:10.725654       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:14:10.725666       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 18:14:10.725691       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 18:14:10.726120       1 server.go:656] "Version info" version="v1.23.6"
	I0531 18:14:10.726896       1 config.go:317] "Starting service config controller"
	I0531 18:14:10.726928       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 18:14:10.727273       1 config.go:226] "Starting endpoint slice config controller"
	I0531 18:14:10.727294       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 18:14:10.827115       1 shared_informer.go:247] Caches are synced for service config 
	I0531 18:14:10.827828       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [68cad910900a461fb5de4d316889c9efed39c08a2b46073700308723fac57649] <==
	* W0531 18:13:53.016951       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:13:53.018021       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:13:53.017084       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:13:53.018050       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:13:53.018160       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:13:53.018227       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:13:53.018334       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:13:53.018373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:13:53.018387       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:13:53.018426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:13:53.018453       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:13:53.018490       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:13:53.018458       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:13:53.018510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:13:53.902751       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:13:53.902799       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:13:53.909875       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:13:53.909929       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:13:53.920925       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:13:53.920954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:13:54.019088       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:13:54.019118       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:13:54.102584       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:13:54.102632       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 18:13:56.308272       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 18:09:08 UTC, end at Tue 2022-05-31 18:27:12 UTC. --
	May 31 18:25:40 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:25:40.040889    2864 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2cxvx_kube-system(acc14297-39c7-4997-9785-f1c36fe06ea9)\"" pod="kube-system/kindnet-2cxvx" podUID=acc14297-39c7-4997-9785-f1c36fe06ea9
	May 31 18:25:41 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:25:41.343635    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:46 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:25:46.345228    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:51 embed-certs-20220531175604-6903 kubelet[2864]: I0531 18:25:51.040733    2864 scope.go:110] "RemoveContainer" containerID="e77aa333c770d0507b05e79792d283f6b10345dfb4868eafe79280a12690fb5f"
	May 31 18:25:51 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:25:51.041140    2864 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2cxvx_kube-system(acc14297-39c7-4997-9785-f1c36fe06ea9)\"" pod="kube-system/kindnet-2cxvx" podUID=acc14297-39c7-4997-9785-f1c36fe06ea9
	May 31 18:25:51 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:25:51.346417    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:25:56 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:25:56.347056    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:01 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:01.347733    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:03 embed-certs-20220531175604-6903 kubelet[2864]: I0531 18:26:03.041215    2864 scope.go:110] "RemoveContainer" containerID="e77aa333c770d0507b05e79792d283f6b10345dfb4868eafe79280a12690fb5f"
	May 31 18:26:03 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:03.041583    2864 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2cxvx_kube-system(acc14297-39c7-4997-9785-f1c36fe06ea9)\"" pod="kube-system/kindnet-2cxvx" podUID=acc14297-39c7-4997-9785-f1c36fe06ea9
	May 31 18:26:06 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:06.348353    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:11 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:11.349828    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:16 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:16.350805    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:18 embed-certs-20220531175604-6903 kubelet[2864]: I0531 18:26:18.040524    2864 scope.go:110] "RemoveContainer" containerID="e77aa333c770d0507b05e79792d283f6b10345dfb4868eafe79280a12690fb5f"
	May 31 18:26:21 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:21.352222    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:26 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:26.353214    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:31 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:31.354879    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:36 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:36.356399    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:41 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:41.357448    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:46 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:46.358222    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:51 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:51.359264    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:26:56 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:26:56.360482    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:27:01 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:27:01.362100    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:27:06 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:27:06.363605    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 31 18:27:11 embed-certs-20220531175604-6903 kubelet[2864]: E0531 18:27:11.364579    2864 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-tnlml metrics-server-b955d9d8-6mjhp storage-provisioner dashboard-metrics-scraper-56974995fc-znnfl kubernetes-dashboard-8469778f77-h54ht
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-tnlml metrics-server-b955d9d8-6mjhp storage-provisioner dashboard-metrics-scraper-56974995fc-znnfl kubernetes-dashboard-8469778f77-h54ht
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-tnlml metrics-server-b955d9d8-6mjhp storage-provisioner dashboard-metrics-scraper-56974995fc-znnfl kubernetes-dashboard-8469778f77-h54ht: exit status 1 (52.397971ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-tnlml" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-6mjhp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-znnfl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-h54ht" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531175604-6903 describe pod coredns-64897985d-tnlml metrics-server-b955d9d8-6mjhp storage-provisioner dashboard-metrics-scraper-56974995fc-znnfl kubernetes-dashboard-8469778f77-h54ht: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.39s)


Test pass (226/265)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 13.07
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.23.6/json-events 5.04
11 TestDownloadOnly/v1.23.6/preload-exists 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.38
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.19
18 TestDownloadOnlyKic 2.96
19 TestBinaryMirror 0.84
20 TestOffline 89.89
22 TestAddons/Setup 93.72
24 TestAddons/parallel/Registry 15.37
25 TestAddons/parallel/Ingress 30.27
26 TestAddons/parallel/MetricsServer 5.44
27 TestAddons/parallel/HelmTiller 14.25
29 TestAddons/parallel/CSI 46.28
31 TestAddons/serial/GCPAuth 33.78
32 TestAddons/StoppedEnableDisable 20.24
33 TestCertOptions 51.27
34 TestCertExpiration 232.26
36 TestForceSystemdFlag 49.66
37 TestForceSystemdEnv 38.72
38 TestKVMDriverInstallOrUpdate 4.9
42 TestErrorSpam/setup 27.24
43 TestErrorSpam/start 0.9
44 TestErrorSpam/status 1.07
45 TestErrorSpam/pause 2.22
46 TestErrorSpam/unpause 1.48
47 TestErrorSpam/stop 14.84
50 TestFunctional/serial/CopySyncFile 0
51 TestFunctional/serial/StartWithProxy 45.37
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 15.46
54 TestFunctional/serial/KubeContext 0.04
55 TestFunctional/serial/KubectlGetPods 0.05
58 TestFunctional/serial/CacheCmd/cache/add_remote 3.08
59 TestFunctional/serial/CacheCmd/cache/add_local 1.89
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
61 TestFunctional/serial/CacheCmd/cache/list 0.06
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
63 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
64 TestFunctional/serial/CacheCmd/cache/delete 0.12
65 TestFunctional/serial/MinikubeKubectlCmd 0.21
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
67 TestFunctional/serial/ExtraConfig 41.6
68 TestFunctional/serial/ComponentHealth 0.06
69 TestFunctional/serial/LogsCmd 1.05
70 TestFunctional/serial/LogsFileCmd 1.08
72 TestFunctional/parallel/ConfigCmd 0.44
73 TestFunctional/parallel/DashboardCmd 13.6
74 TestFunctional/parallel/DryRun 0.56
75 TestFunctional/parallel/InternationalLanguage 0.77
76 TestFunctional/parallel/StatusCmd 1.4
79 TestFunctional/parallel/ServiceCmd 9.22
80 TestFunctional/parallel/ServiceCmdConnect 11.69
81 TestFunctional/parallel/AddonsCmd 0.17
82 TestFunctional/parallel/PersistentVolumeClaim 32.8
84 TestFunctional/parallel/SSHCmd 0.84
85 TestFunctional/parallel/CpCmd 1.47
86 TestFunctional/parallel/MySQL 23.54
87 TestFunctional/parallel/FileSync 0.41
88 TestFunctional/parallel/CertSync 2.22
92 TestFunctional/parallel/NodeLabels 0.1
94 TestFunctional/parallel/NonActiveRuntimeDisabled 0.84
96 TestFunctional/parallel/Version/short 0.08
97 TestFunctional/parallel/Version/components 2.39
98 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
99 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
100 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
101 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
102 TestFunctional/parallel/ImageCommands/ImageBuild 3.36
103 TestFunctional/parallel/ImageCommands/Setup 1.13
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.59
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.22
109 TestFunctional/parallel/ProfileCmd/profile_list 0.6
110 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.34
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.53
113 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.4
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.84
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.16
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
127 TestFunctional/parallel/MountCmd/any-port 14.83
128 TestFunctional/parallel/MountCmd/specific-port 2.51
129 TestFunctional/delete_addon-resizer_images 0.09
130 TestFunctional/delete_my-image_image 0.03
131 TestFunctional/delete_minikube_cached_images 0.03
134 TestIngressAddonLegacy/StartLegacyK8sCluster 75.4
136 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.1
137 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.35
138 TestIngressAddonLegacy/serial/ValidateIngressAddons 33.49
141 TestJSONOutput/start/Command 44.84
142 TestJSONOutput/start/Audit 0
144 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/pause/Command 0.65
148 TestJSONOutput/pause/Audit 0
150 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/unpause/Command 0.59
154 TestJSONOutput/unpause/Audit 0
156 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/stop/Command 15.67
160 TestJSONOutput/stop/Audit 0
162 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
164 TestErrorJSONOutput 0.28
166 TestKicCustomNetwork/create_custom_network 31.17
167 TestKicCustomNetwork/use_default_bridge_network 25.85
168 TestKicExistingNetwork 26.17
169 TestKicCustomSubnet 26.41
170 TestMainNoArgs 0.06
171 TestMinikubeProfile 54.97
174 TestMountStart/serial/StartWithMountFirst 4.61
175 TestMountStart/serial/VerifyMountFirst 0.31
176 TestMountStart/serial/StartWithMountSecond 4.69
177 TestMountStart/serial/VerifyMountSecond 0.32
178 TestMountStart/serial/DeleteFirst 1.79
179 TestMountStart/serial/VerifyMountPostDelete 0.32
180 TestMountStart/serial/Stop 1.27
181 TestMountStart/serial/RestartStopped 6.46
182 TestMountStart/serial/VerifyMountPostStop 0.32
185 TestMultiNode/serial/FreshStart2Nodes 76.13
186 TestMultiNode/serial/DeployApp2Nodes 4.01
187 TestMultiNode/serial/PingHostFrom2Pods 0.78
188 TestMultiNode/serial/AddNode 41.27
189 TestMultiNode/serial/ProfileList 0.34
190 TestMultiNode/serial/CopyFile 11.41
191 TestMultiNode/serial/StopNode 2.39
192 TestMultiNode/serial/StartAfterStop 35.94
193 TestMultiNode/serial/RestartKeepsNodes 172.38
194 TestMultiNode/serial/DeleteNode 5.09
195 TestMultiNode/serial/StopMultiNode 40.18
196 TestMultiNode/serial/RestartMultiNode 88.11
197 TestMultiNode/serial/ValidateNameConflict 31.86
202 TestPreload 133.64
204 TestScheduledStopUnix 119.96
207 TestInsufficientStorage 16.81
208 TestRunningBinaryUpgrade 89.32
210 TestKubernetesUpgrade 145.06
211 TestMissingContainerUpgrade 144.08
213 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
214 TestNoKubernetes/serial/StartWithK8s 64.03
215 TestNoKubernetes/serial/StartWithStopK8s 19.57
216 TestNoKubernetes/serial/Start 4.44
217 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
218 TestNoKubernetes/serial/ProfileList 4.04
226 TestNetworkPlugins/group/false 0.48
230 TestNoKubernetes/serial/Stop 5.49
231 TestNoKubernetes/serial/StartNoArgs 6.77
232 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
241 TestPause/serial/Start 60.33
242 TestStoppedBinaryUpgrade/Setup 0.37
243 TestStoppedBinaryUpgrade/Upgrade 92.95
244 TestPause/serial/SecondStartNoReconfiguration 17.27
245 TestPause/serial/Pause 1.39
246 TestPause/serial/VerifyStatus 0.39
247 TestPause/serial/Unpause 0.79
248 TestPause/serial/PauseAgain 5.37
249 TestPause/serial/DeletePaused 5.26
250 TestPause/serial/VerifyDeletedResources 0.88
251 TestNetworkPlugins/group/auto/Start 74.81
252 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
253 TestNetworkPlugins/group/kindnet/Start 70.96
254 TestNetworkPlugins/group/cilium/Start 86.79
255 TestNetworkPlugins/group/auto/KubeletFlags 0.4
256 TestNetworkPlugins/group/auto/NetCatPod 8.21
257 TestNetworkPlugins/group/auto/DNS 0.12
258 TestNetworkPlugins/group/auto/Localhost 0.11
259 TestNetworkPlugins/group/auto/HairPin 0.12
261 TestNetworkPlugins/group/enable-default-cni/Start 318.81
262 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
263 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
264 TestNetworkPlugins/group/kindnet/NetCatPod 9.39
265 TestNetworkPlugins/group/kindnet/DNS 0.15
266 TestNetworkPlugins/group/kindnet/Localhost 0.14
267 TestNetworkPlugins/group/kindnet/HairPin 0.13
268 TestNetworkPlugins/group/bridge/Start 292.66
269 TestNetworkPlugins/group/cilium/ControllerPod 5.02
270 TestNetworkPlugins/group/cilium/KubeletFlags 0.37
271 TestNetworkPlugins/group/cilium/NetCatPod 9.95
272 TestNetworkPlugins/group/cilium/DNS 0.14
273 TestNetworkPlugins/group/cilium/Localhost 0.11
274 TestNetworkPlugins/group/cilium/HairPin 0.13
276 TestStartStop/group/old-k8s-version/serial/FirstStart 100.83
277 TestStartStop/group/old-k8s-version/serial/DeployApp 8.3
278 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.55
279 TestStartStop/group/old-k8s-version/serial/Stop 20.15
280 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
281 TestStartStop/group/old-k8s-version/serial/SecondStart 427.82
282 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
283 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
284 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
285 TestNetworkPlugins/group/bridge/NetCatPod 8.28
290 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
291 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
292 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
293 TestStartStop/group/old-k8s-version/serial/Pause 2.97
297 TestStartStop/group/newest-cni/serial/FirstStart 248.34
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.55
304 TestStartStop/group/newest-cni/serial/Stop 20.07
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/newest-cni/serial/SecondStart 34.11
308 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
309 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.57
313 TestStartStop/group/no-preload/serial/Stop 11.2
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.55
317 TestStartStop/group/default-k8s-different-port/serial/Stop 9.45
318 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.53
321 TestStartStop/group/embed-certs/serial/Stop 10.41
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
TestDownloadOnly/v1.16.0/json-events (13.07s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220531171228-6903 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220531171228-6903 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.074259414s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.07s)

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220531171228-6903
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220531171228-6903: exit status 85 (73.890064ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 17:12:28
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:12:28.584975    6915 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:12:28.585123    6915 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:12:28.585136    6915 out.go:309] Setting ErrFile to fd 2...
	I0531 17:12:28.585142    6915 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:12:28.585241    6915 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	W0531 17:12:28.585353    6915 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/config/config.json: no such file or directory
	I0531 17:12:28.585577    6915 out.go:303] Setting JSON to true
	I0531 17:12:28.586306    6915 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3300,"bootTime":1654013849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:12:28.586358    6915 start.go:125] virtualization: kvm guest
	I0531 17:12:28.588787    6915 out.go:97] [download-only-20220531171228-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:12:28.588880    6915 notify.go:193] Checking for updates...
	W0531 17:12:28.588882    6915 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball: no such file or directory
	I0531 17:12:28.590187    6915 out.go:169] MINIKUBE_LOCATION=14079
	I0531 17:12:28.591514    6915 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:12:28.592896    6915 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:12:28.594170    6915 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:12:28.595503    6915 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0531 17:12:28.597901    6915 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 17:12:28.598108    6915 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:12:28.630640    6915 docker.go:137] docker version: linux-20.10.16
	I0531 17:12:28.630708    6915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:12:29.310366    6915 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-31 17:12:28.654105042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:12:29.310492    6915 docker.go:254] overlay module found
	I0531 17:12:29.312178    6915 out.go:97] Using the docker driver based on user configuration
	I0531 17:12:29.312195    6915 start.go:284] selected driver: docker
	I0531 17:12:29.312201    6915 start.go:806] validating driver "docker" against <nil>
	I0531 17:12:29.312357    6915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:12:29.408944    6915 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-05-31 17:12:29.337035908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:12:29.409094    6915 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 17:12:29.410154    6915 start_flags.go:373] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0531 17:12:29.410309    6915 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 17:12:29.412264    6915 out.go:169] Using Docker driver with the root privilege
	I0531 17:12:29.413574    6915 cni.go:95] Creating CNI manager for ""
	I0531 17:12:29.413588    6915 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0531 17:12:29.413603    6915 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:12:29.413613    6915 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0531 17:12:29.413618    6915 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 17:12:29.413637    6915 start_flags.go:306] config:
	{Name:download-only-20220531171228-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220531171228-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:12:29.415078    6915 out.go:97] Starting control plane node download-only-20220531171228-6903 in cluster download-only-20220531171228-6903
	I0531 17:12:29.415095    6915 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0531 17:12:29.416356    6915 out.go:97] Pulling base image ...
	I0531 17:12:29.416376    6915 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0531 17:12:29.416424    6915 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:12:29.455211    6915 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 17:12:29.455228    6915 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 to local cache
	I0531 17:12:29.455464    6915 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local cache directory
	I0531 17:12:29.455546    6915 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 to local cache
	I0531 17:12:29.472563    6915 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0531 17:12:29.472585    6915 cache.go:57] Caching tarball of preloaded images
	I0531 17:12:29.472714    6915 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0531 17:12:29.474799    6915 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0531 17:12:29.474814    6915 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0531 17:12:29.528179    6915 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0531 17:12:37.708107    6915 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220531171228-6903"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
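Note: exit status 85 here is expected, not a failure. A --download-only start fetches the preload and binaries but never creates a control-plane node, so a follow-up "minikube logs" has nothing to read. A minimal sketch of the same sequence (the profile name "download-demo" is a placeholder, not from this run):

    # Fetch artifacts only; no node or container is created.
    out/minikube-linux-amd64 start -o=json --download-only -p download-demo --force \
      --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker
    # With no control-plane node, logs should exit non-zero (85 on this build).
    out/minikube-linux-amd64 logs -p download-demo; echo "exit: $?"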

                                                
                                    
TestDownloadOnly/v1.23.6/json-events (5.04s)
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220531171228-6903 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220531171228-6903 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.041179633s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (5.04s)

TestDownloadOnly/v1.23.6/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

TestDownloadOnly/v1.23.6/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220531171228-6903
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220531171228-6903: exit status 85 (71.357486ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 17:12:41
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:12:41.733364    7082 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:12:41.733457    7082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:12:41.733465    7082 out.go:309] Setting ErrFile to fd 2...
	I0531 17:12:41.733469    7082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:12:41.733555    7082 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	W0531 17:12:41.733665    7082 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/config/config.json: no such file or directory
	I0531 17:12:41.733767    7082 out.go:303] Setting JSON to true
	I0531 17:12:41.734460    7082 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3313,"bootTime":1654013849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:12:41.734515    7082 start.go:125] virtualization: kvm guest
	I0531 17:12:41.736671    7082 out.go:97] [download-only-20220531171228-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:12:41.736764    7082 notify.go:193] Checking for updates...
	I0531 17:12:41.738265    7082 out.go:169] MINIKUBE_LOCATION=14079
	I0531 17:12:41.739598    7082 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:12:41.741917    7082 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:12:41.743319    7082 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:12:41.744589    7082 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220531171228-6903"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.38s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.38s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.19s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220531171228-6903
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.19s)

TestDownloadOnlyKic (2.96s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220531171247-6903 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220531171247-6903 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (1.827523863s)
helpers_test.go:175: Cleaning up "download-docker-20220531171247-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220531171247-6903
--- PASS: TestDownloadOnlyKic (2.96s)

TestBinaryMirror (0.84s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220531171250-6903 --alsologtostderr --binary-mirror http://127.0.0.1:41255 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220531171250-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220531171250-6903
--- PASS: TestBinaryMirror (0.84s)
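Note: TestBinaryMirror points minikube's Kubernetes binary downloads at a local HTTP endpoint via --binary-mirror. A rough standalone equivalent, assuming python3 for a throwaway file server (the port, directory, and profile name are placeholders; the test starts its own server):

    # Serve a directory of cached binaries over HTTP.
    python3 -m http.server 41255 --directory /tmp/binary-mirror &
    # Fetch the Kubernetes binaries from the mirror instead of the default upstream.
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:41255 --driver=docker --container-runtime=containerd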

                                                
                                    
TestOffline (89.89s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220531173859-6903 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220531173859-6903 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m27.468551832s)
helpers_test.go:175: Cleaning up "offline-containerd-20220531173859-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220531173859-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220531173859-6903: (2.423915165s)
--- PASS: TestOffline (89.89s)

TestAddons/Setup (93.72s)
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220531171251-6903 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220531171251-6903 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m33.71987288s)
--- PASS: TestAddons/Setup (93.72s)
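Note: the setup enables eight addons in a single start by repeating --addons. The same shape, trimmed down (the profile name "addons-demo" is a placeholder):

    out/minikube-linux-amd64 start -p addons-demo --wait=true --memory=4000 \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=ingress --addons=helm-tiller
    # Addons can also be toggled on a running profile, as the tests below do:
    out/minikube-linux-amd64 -p addons-demo addons enable volumesnapshots
    out/minikube-linux-amd64 -p addons-demo addons disable registry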

                                                
                                    
TestAddons/parallel/Registry (15.37s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 9.822555ms
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-sz7qs" [ea28df44-6090-4a11-96ac-347d2603c2ab] Running
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008607556s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-v6w9h" [b69dbedc-1d78-4533-af3e-40436b75454e] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007115136s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220531171251-6903 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220531171251-6903 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:295: (dbg) Done: kubectl --context addons-20220531171251-6903 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.643120015s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 ip
2022/05/31 17:14:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.37s)
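Note: the registry addon is probed from two sides, condensed below from the commands above (the profile/context name is a placeholder): in-cluster, where the service DNS name resolves, and from the host through registry-proxy on the node IP.

    # In-cluster: the service name only resolves from a pod.
    kubectl --context addons-demo run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # From the host: registry-proxy publishes port 5000 on the node IP.
    curl -s "http://$(out/minikube-linux-amd64 -p addons-demo ip):5000/"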

                                                
                                    
TestAddons/parallel/Ingress (30.27s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220531171251-6903 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context addons-20220531171251-6903 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.3832259s)
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220531171251-6903 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220531171251-6903 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [c5a5da4c-3101-479a-bb24-11bbfb461191] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [c5a5da4c-3101-479a-bb24-11bbfb461191] Running
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005360585s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220531171251-6903 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:236: (dbg) Done: kubectl --context addons-20220531171251-6903 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.088369983s)
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable ingress-dns --alsologtostderr -v=1: (1.193212118s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable ingress --alsologtostderr -v=1: (7.472668411s)
--- PASS: TestAddons/parallel/Ingress (30.27s)
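Note: two paths are verified here, and both probes can be run standalone (the profile name is a placeholder): HTTP through the ingress controller with an overridden Host header, and hostname resolution through ingress-dns against the node IP.

    # Request routed by the Host header to the nginx ingress rule.
    out/minikube-linux-amd64 -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns serves records for hostnames from the example manifest.
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-demo ip)"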

                                                
                                    
TestAddons/parallel/MetricsServer (5.44s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 2.718601ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-bd6f4dd56-hnht9" [5186ce2d-faf6-4f33-83fd-a1c07ac30251] Running
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008455691s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220531171251-6903 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.44s)

TestAddons/parallel/HelmTiller (14.25s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 8.401168ms
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-6d67d5465d-whvfv" [dec32938-9ab8-4639-a6cb-23831d46f338] Running
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008336752s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220531171251-6903 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:423: (dbg) Done: kubectl --context addons-20220531171251-6903 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.671767442s)
addons_test.go:428: kubectl --context addons-20220531171251-6903 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.25s)
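Note: the "Unable to use a TTY" stderr above is benign; the harness passes -it to kubectl run without a real terminal attached, and the test tolerates that exact message. A sketch of the same probe without requesting a TTY (the context name is a placeholder):

    # Query tiller's version from inside the cluster; -i only (no -t), so no TTY warning.
    kubectl --context addons-demo run --rm helm-test --restart=Never \
      --image=alpine/helm:2.16.3 --namespace=kube-system -i -- version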

                                                
                                    
TestAddons/parallel/CSI (46.28s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 10.488603ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220531171251-6903 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220531171251-6903 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220531171251-6903 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [76e99694-16e8-4e79-99e1-10825ab8e963] Pending
helpers_test.go:342: "task-pv-pod" [76e99694-16e8-4e79-99e1-10825ab8e963] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [76e99694-16e8-4e79-99e1-10825ab8e963] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.007756191s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220531171251-6903 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220531171251-6903 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220531171251-6903 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220531171251-6903 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220531171251-6903 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220531171251-6903 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220531171251-6903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220531171251-6903 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [43515c55-55fd-4f39-bdf5-27491c4acc87] Pending
helpers_test.go:342: "task-pv-pod-restore" [43515c55-55fd-4f39-bdf5-27491c4acc87] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [43515c55-55fd-4f39-bdf5-27491c4acc87] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.006025787s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220531171251-6903 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220531171251-6903 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220531171251-6903 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.865476035s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.28s)
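Note: the sequence above is a full provision/snapshot/restore round trip against csi-hostpath-driver. Condensed (kubectl context flags omitted; the manifests are the repo's testdata, and the restore PVC presumably names the snapshot as its dataSource):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml             # provision "hpvc"
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # writer pod binds the PVC
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot "new-snapshot-demo"
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # "hpvc-restore" from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # reader pod mounts the restored volume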

                                                
                                    
TestAddons/serial/GCPAuth (33.78s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220531171251-6903 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [8c27840a-a6f7-4cbc-b937-b9dfc0f825e7] Pending
helpers_test.go:342: "busybox" [8c27840a-a6f7-4cbc-b937-b9dfc0f825e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [8c27840a-a6f7-4cbc-b937-b9dfc0f825e7] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 7.006040258s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220531171251-6903 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220531171251-6903 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220531171251-6903 addons disable gcp-auth --alsologtostderr -v=1: (5.755520446s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220531171251-6903 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220531171251-6903 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7f8587d5b7-7xpm5" [b30ca6a1-db59-4ffa-99f4-94864314a384] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7f8587d5b7-7xpm5" [b30ca6a1-db59-4ffa-99f4-94864314a384] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 11.005616199s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220531171251-6903 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-869dcfd8c7-rg6d5" [7d4f6d2e-ece0-4a9c-8505-bbcff13b9e40] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-869dcfd8c7-rg6d5" [7d4f6d2e-ece0-4a9c-8505-bbcff13b9e40] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.005210811s
--- PASS: TestAddons/serial/GCPAuth (33.78s)
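Note: gcp-auth injects Google credentials into pods created while the addon is enabled (via a mutating webhook, per the addon's design), so the assertions reduce to printenv calls from inside a pod:

    kubectl exec busybox -- sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl exec busybox -- sh -c "printenv GOOGLE_CLOUD_PROJECT"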

                                                
                                    
TestAddons/StoppedEnableDisable (20.24s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220531171251-6903
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220531171251-6903: (20.058163866s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220531171251-6903
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220531171251-6903
--- PASS: TestAddons/StoppedEnableDisable (20.24s)

TestCertOptions (51.27s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220531174109-6903 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220531174109-6903 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (48.029230208s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220531174109-6903 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220531174109-6903 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220531174109-6903 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220531174109-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220531174109-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220531174109-6903: (2.507682217s)
--- PASS: TestCertOptions (51.27s)
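Note: the point of the openssl step is that every --apiserver-ips and --apiserver-names value must appear as a SAN in the generated apiserver certificate (and the custom port in admin.conf). A direct way to inspect the SANs (the profile name is a placeholder):

    out/minikube-linux-amd64 -p cert-demo ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"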

                                                
                                    
TestCertExpiration (232.26s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220531174046-6903 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220531174046-6903 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (34.782690065s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220531174046-6903 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220531174046-6903 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.928932842s)
helpers_test.go:175: Cleaning up "cert-expiration-20220531174046-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220531174046-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220531174046-6903: (2.543528789s)
--- PASS: TestCertExpiration (232.26s)
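Note: the 232s runtime is mostly deliberate waiting: the first start issues 3-minute certificates (~35s), the harness evidently waits out the expiry, then the second start re-issues them with the new --cert-expiration (~15s). By hand (the profile name is a placeholder):

    out/minikube-linux-amd64 start -p cert-exp-demo --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    sleep 180   # wait out the 3m certificates
    out/minikube-linux-amd64 start -p cert-exp-demo --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd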

                                                
                                    
TestForceSystemdFlag (49.66s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220531174034-6903 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220531174034-6903 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (46.556213544s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220531174034-6903 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220531174034-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220531174034-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220531174034-6903: (2.544582035s)
--- PASS: TestForceSystemdFlag (49.66s)
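Note: both systemd tests end with the same check: read containerd's config from inside the node and confirm the cgroup driver. A focused version (the profile name is a placeholder; SystemdCgroup is the runc option containerd uses for the systemd cgroup driver):

    out/minikube-linux-amd64 -p systemd-demo ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup    # expect: SystemdCgroup = true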

                                                
                                    
TestForceSystemdEnv (38.72s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220531174030-6903 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220531174030-6903 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.638306263s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220531174030-6903 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220531174030-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220531174030-6903
E0531 17:41:05.001984    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220531174030-6903: (4.708342844s)
--- PASS: TestForceSystemdEnv (38.72s)

TestKVMDriverInstallOrUpdate (4.9s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.90s)

TestErrorSpam/setup (27.24s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220531171611-6903 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220531171611-6903 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220531171611-6903 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220531171611-6903 --driver=docker  --container-runtime=containerd: (27.235580717s)
--- PASS: TestErrorSpam/setup (27.24s)

TestErrorSpam/start (0.9s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 start --dry-run
--- PASS: TestErrorSpam/start (0.90s)

TestErrorSpam/status (1.07s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (2.22s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 pause: (1.2787118s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 pause
--- PASS: TestErrorSpam/pause (2.22s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (14.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 stop: (14.588349659s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220531171611-6903 --log_dir /tmp/nospam-20220531171611-6903 stop
--- PASS: TestErrorSpam/stop (14.84s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/test/nested/copy/6903/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220531171704-6903 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220531171704-6903 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (45.365425758s)
--- PASS: TestFunctional/serial/StartWithProxy (45.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.46s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220531171704-6903 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220531171704-6903 --alsologtostderr -v=8: (15.454567775s)
functional_test.go:655: soft start took 15.455180849s for "functional-20220531171704-6903" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.46s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220531171704-6903 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 cache add k8s.gcr.io/pause:3.3: (1.346915625s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 cache add k8s.gcr.io/pause:latest: (1.024717048s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220531171704-6903 /tmp/TestFunctionalserialCacheCmdcacheadd_local3874967943/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 cache add minikube-local-cache-test:functional-20220531171704-6903
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 cache add minikube-local-cache-test:functional-20220531171704-6903: (1.61634865s)
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 cache delete minikube-local-cache-test:functional-20220531171704-6903
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220531171704-6903
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

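Note on add_local above: `minikube cache add` also accepts images that exist only in the host's Docker daemon; the test builds a throwaway image, caches it into the cluster, then deletes it from both the cache and the host. A minimal sketch of the same flow (image tag and build context are illustrative, `<profile>` is a placeholder):

	$ docker build -t my-local-image:test ./some-build-context
	$ out/minikube-linux-amd64 -p <profile> cache add my-local-image:test      # copies the host image into the node's runtime
	$ out/minikube-linux-amd64 -p <profile> cache delete my-local-image:test
	$ docker rmi my-local-image:test
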
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (337.211442ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 cache reload
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

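Note on cache_reload above: the test deletes a cached image from the node's containerd store, confirms with `crictl inspecti` that it is gone (exit status 1), then `cache reload` pushes the host-side cache back into the node. A minimal sketch of the same check (`<profile>` is a placeholder):

	$ out/minikube-linux-amd64 -p <profile> ssh sudo crictl rmi k8s.gcr.io/pause:latest
	$ out/minikube-linux-amd64 -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest    # exits 1: image no longer present
	$ out/minikube-linux-amd64 -p <profile> cache reload                                        # re-pushes cached images into the node
	$ out/minikube-linux-amd64 -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest    # succeeds again
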
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.21s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 kubectl -- --context functional-20220531171704-6903 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.21s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220531171704-6903 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220531171704-6903 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220531171704-6903 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.594927245s)
functional_test.go:753: restart took 41.595047656s for "functional-20220531171704-6903" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.60s)

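Note on ExtraConfig above: `--extra-config` takes `component.key=value` pairs and is applied here on a restart of an existing profile, so the cluster comes back up with the extra apiserver flag. A minimal sketch (`<profile>` is a placeholder):

	$ out/minikube-linux-amd64 start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
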
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220531171704-6903 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.05s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 logs
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 logs: (1.045603911s)
--- PASS: TestFunctional/serial/LogsCmd (1.05s)

TestFunctional/serial/LogsFileCmd (1.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 logs --file /tmp/TestFunctionalserialLogsFileCmd2684771884/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 logs --file /tmp/TestFunctionalserialLogsFileCmd2684771884/001/logs.txt: (1.07457655s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.08s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 config get cpus: exit status 14 (69.384988ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 config get cpus: exit status 14 (71.625804ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

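Note on ConfigCmd above: `minikube config get` exits with status 14 when the key is not set, which is what the test asserts before and after the set/unset cycle. A minimal sketch of that cycle (`<profile>` is a placeholder):

	$ out/minikube-linux-amd64 -p <profile> config get cpus      # exit status 14: key not in config
	$ out/minikube-linux-amd64 -p <profile> config set cpus 2
	$ out/minikube-linux-amd64 -p <profile> config get cpus      # prints 2
	$ out/minikube-linux-amd64 -p <profile> config unset cpus
	$ out/minikube-linux-amd64 -p <profile> config get cpus      # exit status 14 again
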
TestFunctional/parallel/DashboardCmd (13.6s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220531171704-6903 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220531171704-6903 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 41444: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.60s)

TestFunctional/parallel/DryRun (0.56s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220531171704-6903 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220531171704-6903 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (234.1168ms)

-- stdout --
	* [functional-20220531171704-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0531 17:19:19.354365   40883 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:19:19.354473   40883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:19:19.354483   40883 out.go:309] Setting ErrFile to fd 2...
	I0531 17:19:19.354487   40883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:19:19.354579   40883 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:19:19.354787   40883 out.go:303] Setting JSON to false
	I0531 17:19:19.356003   40883 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3710,"bootTime":1654013849,"procs":499,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:19:19.356065   40883 start.go:125] virtualization: kvm guest
	I0531 17:19:19.358082   40883 out.go:177] * [functional-20220531171704-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:19:19.359943   40883 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:19:19.361524   40883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:19:19.363000   40883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:19:19.364463   40883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:19:19.366101   40883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:19:19.368085   40883 config.go:178] Loaded profile config "functional-20220531171704-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:19:19.368655   40883 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:19:19.408993   40883 docker.go:137] docker version: linux-20.10.16
	I0531 17:19:19.409084   40883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:19:19.505905   40883 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:64 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-31 17:19:19.437830453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:19:19.506000   40883 docker.go:254] overlay module found
	I0531 17:19:19.508308   40883 out.go:177] * Using the docker driver based on existing profile
	I0531 17:19:19.509848   40883 start.go:284] selected driver: docker
	I0531 17:19:19.509862   40883 start.go:806] validating driver "docker" against &{Name:functional-20220531171704-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531171704-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:19:19.509970   40883 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:19:19.512302   40883 out.go:177] 
	W0531 17:19:19.513772   40883 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0531 17:19:19.515275   40883 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220531171704-6903 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.56s)

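Note on DryRun above: `--dry-run` runs driver selection and resource validation without creating or mutating anything, so the undersized 250MB request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY; the usable minimum reported above is 1800MB). A minimal sketch (`<profile>` is a placeholder):

	$ out/minikube-linux-amd64 start -p <profile> --dry-run --memory 250MB    # exit 23, nothing is created
	$ out/minikube-linux-amd64 start -p <profile> --dry-run                   # validates against the profile's existing settings
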
TestFunctional/parallel/InternationalLanguage (0.77s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220531171704-6903 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220531171704-6903 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (768.124929ms)

-- stdout --
	* [functional-20220531171704-6903] minikube v1.26.0-beta.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0531 17:19:17.204971   39797 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:19:17.205126   39797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:19:17.205140   39797 out.go:309] Setting ErrFile to fd 2...
	I0531 17:19:17.205147   39797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:19:17.205346   39797 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:19:17.206988   39797 out.go:303] Setting JSON to false
	I0531 17:19:17.208458   39797 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3708,"bootTime":1654013849,"procs":510,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:19:17.208526   39797 start.go:125] virtualization: kvm guest
	I0531 17:19:17.211510   39797 out.go:177] * [functional-20220531171704-6903] minikube v1.26.0-beta.1 sur Ubuntu 20.04 (kvm/amd64)
	I0531 17:19:17.213266   39797 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:19:17.215377   39797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:19:17.217460   39797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:19:17.219235   39797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:19:17.222129   39797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:19:17.224476   39797 config.go:178] Loaded profile config "functional-20220531171704-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:19:17.225065   39797 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:19:17.280997   39797 docker.go:137] docker version: linux-20.10.16
	I0531 17:19:17.281077   39797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:19:17.411742   39797 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:64 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-31 17:19:17.316008059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:19:17.411867   39797 docker.go:254] overlay module found
	I0531 17:19:17.567339   39797 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0531 17:19:17.651715   39797 start.go:284] selected driver: docker
	I0531 17:19:17.651756   39797 start.go:806] validating driver "docker" against &{Name:functional-20220531171704-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531171704-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:19:17.651926   39797 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:19:17.743844   39797 out.go:177] 
	W0531 17:19:17.846237   39797 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0531 17:19:17.877966   39797 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.77s)

TestFunctional/parallel/StatusCmd (1.4s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)

TestFunctional/parallel/ServiceCmd (9.22s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220531171704-6903 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220531171704-6903 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-g4hv6" [c0beb0d6-4f18-4610-b542-afa8485bff80] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-54fbb85-g4hv6" [c0beb0d6-4f18-4610-b542-afa8485bff80] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.006287202s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 service list: (1.000332725s)
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1475: found endpoint: https://192.168.49.2:31571
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:31571
--- PASS: TestFunctional/parallel/ServiceCmd (9.22s)

TestFunctional/parallel/ServiceCmdConnect (11.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220531171704-6903 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220531171704-6903 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-84mss" [7d0edfd2-7aff-47eb-8a2e-7df829f2ce7e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-84mss" [7d0edfd2-7aff-47eb-8a2e-7df829f2ce7e] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.081580846s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 service hello-node-connect --url
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.49.2:31505
functional_test.go:1604: http://192.168.49.2:31505: success! body:

Hostname: hello-node-connect-74cf8bc446-84mss

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31505
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.69s)

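Note on ServiceCmdConnect above: the test exposes a deployment as a NodePort service and fetches the echoserver response through the URL minikube reports. The same flow by hand (`<profile>` is a placeholder; the test fetches with Go's HTTP client, curl is an equivalent stand-in):

	$ kubectl --context <profile> create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
	$ kubectl --context <profile> expose deployment hello-node-connect --type=NodePort --port=8080
	$ out/minikube-linux-amd64 -p <profile> service hello-node-connect --url             # prints http://<node-ip>:<nodeport>
	$ curl "$(out/minikube-linux-amd64 -p <profile> service hello-node-connect --url)"   # echoserver reports hostname and request headers
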
TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (32.8s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [9dd8b0f8-6682-4a14-b938-589b76c4545a] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016202849s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220531171704-6903 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220531171704-6903 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220531171704-6903 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220531171704-6903 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220531171704-6903 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [65d80fd9-3c93-490d-96d1-17d8bddc0fcb] Pending
helpers_test.go:342: "sp-pod" [65d80fd9-3c93-490d-96d1-17d8bddc0fcb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [65d80fd9-3c93-490d-96d1-17d8bddc0fcb] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.028823561s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220531171704-6903 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220531171704-6903 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220531171704-6903 delete -f testdata/storage-provisioner/pod.yaml: (1.906372186s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220531171704-6903 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [96cb51b7-918f-4623-a484-6f72a9ecc25a] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [96cb51b7-918f-4623-a484-6f72a9ecc25a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [96cb51b7-918f-4623-a484-6f72a9ecc25a] Running
E0531 17:19:25.230207    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:19:25.390604    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:19:25.711703    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:19:26.352256    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.006061669s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220531171704-6903 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.80s)

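Note on PersistentVolumeClaim above: the test proves the claim outlives the pod by writing a file into the mount, deleting and recreating the pod, and listing the mount again. The same check by hand, using the test's own manifests (`<profile>` is a placeholder):

	$ kubectl --context <profile> apply -f testdata/storage-provisioner/pvc.yaml
	$ kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context <profile> exec sp-pod -- touch /tmp/mount/foo
	$ kubectl --context <profile> delete -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context <profile> exec sp-pod -- ls /tmp/mount     # foo survived pod recreation via the PVC
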
TestFunctional/parallel/SSHCmd (0.84s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.84s)

TestFunctional/parallel/CpCmd (1.47s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh -n functional-20220531171704-6903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 cp functional-20220531171704-6903:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1232968503/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh -n functional-20220531171704-6903 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)
TestFunctional/parallel/MySQL (23.54s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220531171704-6903 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-wbzwd" [a8b20ee8-5ffa-4af1-88f9-85aa0a2c9786] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-wbzwd" [a8b20ee8-5ffa-4af1-88f9-85aa0a2c9786] Running
E0531 17:19:25.073401    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:19:25.078977    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:19:25.089202    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:19:25.109443    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:19:25.149692    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.005589228s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;": exit status 1 (214.544688ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
E0531 17:19:30.193333    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;": exit status 1 (200.416851ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;": exit status 1 (201.050839ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;": exit status 1 (133.210259ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
E0531 17:19:35.314277    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531171704-6903 exec mysql-b87c45988-wbzwd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.54s)
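The non-zero exits above are the expected shape of this test rather than a failure: while the mysql:5.7 container finishes initializing, queries can be rejected with ERROR 1045 (credentials not yet in effect) or ERROR 2002 (server socket briefly absent), so functional_test.go:1733 keeps re-running the query until one attempt succeeds. A sketch of that kind of retry loop, using the pod name from this run; the attempt count and backoff are illustrative.

// mysqlretry_sketch.go: retry `kubectl exec ... mysql` until mysqld answers.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-20220531171704-6903",
			"exec", "mysql-b87c45988-wbzwd", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// ERROR 1045 / ERROR 2002 are transient while the container starts up.
		time.Sleep(time.Duration(attempt) * time.Second)
	}
	panic("mysql never became queryable")
}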
TestFunctional/parallel/FileSync (0.41s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/6903/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo cat /etc/test/nested/copy/6903/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)
TestFunctional/parallel/CertSync (2.22s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/6903.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo cat /etc/ssl/certs/6903.pem"
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/6903.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo cat /usr/share/ca-certificates/6903.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/69032.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo cat /etc/ssl/certs/69032.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/69032.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo cat /usr/share/ca-certificates/69032.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)
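CertSync verifies each synced certificate twice: under its plain file name (6903.pem, 69032.pem) and under the OpenSSL subject-hash names installed in /etc/ssl/certs (51391683.0, 3ec20f2e.0). A table-driven sketch of the same existence checks, with the paths taken from this run:

// certsync_sketch.go: confirm each synced cert path exists in the guest.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/6903.pem",
		"/usr/share/ca-certificates/6903.pem",
		"/etc/ssl/certs/51391683.0", // OpenSSL subject-hash alias
		"/etc/ssl/certs/69032.pem",
		"/usr/share/ca-certificates/69032.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		cmd := exec.Command("out/minikube-linux-amd64", "-p",
			"functional-20220531171704-6903", "ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			panic(fmt.Sprintf("%s missing in guest: %v", p, err))
		}
		fmt.Println("ok:", p)
	}
}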
TestFunctional/parallel/NodeLabels (0.1s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220531171704-6903 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
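The --template argument above is a Go text/template evaluated against the `get nodes` list: it indexes the first item and ranges over its metadata.labels map, printing each key. The same template can be exercised locally against stub data; the labels below are illustrative, not this node's actual label set.

// nodelabels_template.go: run the kubectl go-template against stub data.
package main

import (
	"os"
	"text/template"
)

func main() {
	data := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{
				"labels": map[string]interface{}{
					"kubernetes.io/hostname": "functional-20220531171704-6903",
					"kubernetes.io/os":       "linux",
				},
			}},
		},
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}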
TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo systemctl is-active docker": exit status 1 (404.00113ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo systemctl is-active crio": exit status 1 (434.874441ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)
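The exit codes above are the point of this test: `systemctl is-active` exits 0 only when a unit is active, and for an inactive unit it prints "inactive" and exits 3, which surfaces through `minikube ssh` as the non-zero exits logged here. Because this cluster runs containerd, docker and crio must both report inactive. A sketch of that assertion, reusing the binary and profile from this run:

// runtimecheck_sketch.go: assert docker and crio are NOT active in the guest.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p",
			"functional-20220531171704-6903", "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		if err == nil {
			panic(unit + " is unexpectedly active")
		}
		if !strings.Contains(string(out), "inactive") {
			panic(fmt.Sprintf("unexpected %s status: %q", unit, out))
		}
		fmt.Println(unit, "is inactive, as expected with the containerd runtime")
	}
}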
TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)
TestFunctional/parallel/Version/components (2.39s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 version -o=json --components: (2.386740788s)
--- PASS: TestFunctional/parallel/Version/components (2.39s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220531171704-6903
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| k8s.gcr.io/echoserver                       | 1.8                            | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                        | sha256:df7b72 | 30.2MB |
| k8s.gcr.io/pause                            | 3.1                            | sha256:da86e6 | 353kB  |
| k8s.gcr.io/pause                            | 3.6                            | sha256:6270bb | 302kB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | sha256:a4ca41 | 13.6MB |
| docker.io/library/minikube-local-cache-test | functional-20220531171704-6903 | sha256:3594a8 | 1.74kB |
| docker.io/library/nginx                     | alpine                         | sha256:b1c3ac | 10.2MB |
| gcr.io/google-containers/addon-resizer      | functional-20220531171704-6903 | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                        | sha256:8fa62c | 32.6MB |
| docker.io/kindest/kindnetd                  | v20210326-1e038dc5             | sha256:6de166 | 54MB   |
| k8s.gcr.io/kube-proxy                       | v1.23.6                        | sha256:4c0375 | 39.3MB |
| k8s.gcr.io/pause                            | 3.3                            | sha256:0184c1 | 298kB  |
| k8s.gcr.io/pause                            | latest                         | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | latest                         | sha256:0e901e | 56.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | sha256:25f8c7 | 98.9MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                        | sha256:595f32 | 15.1MB |
| docker.io/library/mysql                     | 5.7                            | sha256:2a0961 | 162MB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls --format json:
[{"id":"sha256:b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":["docker.io/library/nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10170636"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":["k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263"],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"98888614"},{"id":"sha256:df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:df94796b78d2285ffe6b231c2b39d25034dde8814de2f75d953a827e77fe6adf"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"30173645"},{"id":"sha256:595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:02b4e994459efa49c3e2392733e269893e23d4ac46e92e94107652963caae78b"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"15134087"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"353405"},{"id":"sha256:2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":["docker.io/library/mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5"],"repoTags":["docker.io/library/mysql:5.7"],"size":"162466158"},{"id":"sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":["k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db"],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"301773"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"],"repoTags":[],"size":"73695017"},{"id":"sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":["docker.io/library/nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514"],"repoTags":["docker.io/library/nginx:latest"],"size":"56746739"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":["k8s.gcr.io/kube-proxy@sha256:cc007fb495f362f18c74e6f5552060c6785ca2b802a5067251de55c7cc880741"],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"39277919"},{"id":"sha256:3594a863c80789c33791eaf9ea590530ad0a3ae641cf76e42e61fc37bbf6f189","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220531171704-6903"],"size":"1740"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220531171704-6903"],"size":"10823156"},{"id":"sha256:8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:0cd8c0bed8d89d914ee5df41e8a40112fb0a28804429c7964296abedc94da9f1"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"32601483"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","repoDigests":["docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"],"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"],"size":"53960776"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
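Of the four list formats, the JSON output above is the one suited to programmatic use. Below is a sketch that shells out to `image ls --format json` and decodes it; the struct fields mirror the id/repoDigests/repoTags/size keys visible in the output, and the printing at the end is illustrative.

// imagelist_decode.go: decode `minikube image ls --format json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"functional-20220531171704-6903", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v %s bytes\n", img.RepoTags, img.Size)
	}
}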
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls --format yaml:
- id: sha256:2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests:
- docker.io/library/mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5
repoTags:
- docker.io/library/mysql:5.7
size: "162466158"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:02b4e994459efa49c3e2392733e269893e23d4ac46e92e94107652963caae78b
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "15134087"
- id: sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests:
- k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
repoTags:
- k8s.gcr.io/pause:3.6
size: "301773"
- id: sha256:7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2
repoTags: []
size: "73695017"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests:
- k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "98888614"
- id: sha256:df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:df94796b78d2285ffe6b231c2b39d25034dde8814de2f75d953a827e77fe6adf
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "30173645"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests:
- docker.io/library/nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514
repoTags:
- docker.io/library/nginx:latest
size: "56746739"
- id: sha256:8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:0cd8c0bed8d89d914ee5df41e8a40112fb0a28804429c7964296abedc94da9f1
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "32601483"
- id: sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb
repoDigests:
- docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c
repoTags:
- docker.io/kindest/kindnetd:v20210326-1e038dc5
size: "53960776"
- id: sha256:b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests:
- docker.io/library/nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989
repoTags:
- docker.io/library/nginx:alpine
size: "10170636"
- id: sha256:4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:cc007fb495f362f18c74e6f5552060c6785ca2b802a5067251de55c7cc880741
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "39277919"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "353405"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:3594a863c80789c33791eaf9ea590530ad0a3ae641cf76e42e61fc37bbf6f189
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220531171704-6903
size: "1740"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh pgrep buildkitd: exit status 1 (379.362765ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image build -t localhost/my-image:functional-20220531171704-6903 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 image build -t localhost/my-image:functional-20220531171704-6903 testdata/build: (2.713733079s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220531171704-6903 image build -t localhost/my-image:functional-20220531171704-6903 testdata/build:
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.5s
#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.2s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 1.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:db68f8155ccbb4ca80b936fff893bf5eaaac941807dcc999992c5da0eecc200b done
#8 exporting config sha256:cbd1fb6adb7bb989af825bbc0d1690007098d0ca5dc5f0b469442dfe5dc00984 done
#8 naming to localhost/my-image:functional-20220531171704-6903 done
#8 DONE 0.1s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.36s)
TestFunctional/parallel/ImageCommands/Setup (1.13s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.08615962s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.13s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220531171704-6903 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220531171704-6903 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [e6b42cc7-7bff-4225-9465-2d4de4a12b69] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [e6b42cc7-7bff-4225-9465-2d4de4a12b69] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.006750892s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.22s)
TestFunctional/parallel/ProfileCmd/profile_list (0.6s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "421.52071ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1324: Took "182.327879ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531171704-6903: (4.062937197s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.34s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1361: Took "404.640315ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1374: Took "67.02654ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531171704-6903: (4.308760986s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.53s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220531171704-6903 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531171704-6903: (4.951035071s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.40s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220531171704-6903 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.107.154.174 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
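AccessDirect passes because the tunnel started in StartTunnel above routes the service's LoadBalancer IP (10.107.154.174 in this run, per the IngressIP step) to the host. The reachability check then reduces to an HTTP GET with retries, roughly as sketched here; the retry budget is illustrative.

// tunnelprobe_sketch.go: poll the tunneled service IP until it answers.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://10.107.154.174" // ingress IP from this run
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Printf("tunnel at %s is working! (HTTP %d)\n", url, resp.StatusCode)
			return
		}
		time.Sleep(time.Second)
	}
	panic("service never became reachable through the tunnel")
}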
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220531171704-6903 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 update-context --alsologtostderr -v=2
2022/05/31 17:19:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image save gcr.io/google-containers/addon-resizer:functional-20220531171704-6903 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.84s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image rm gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.16s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)
TestFunctional/parallel/MountCmd/any-port (14.83s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220531171704-6903 /tmp/TestFunctionalparallelMountCmdany-port2380259554/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1654017557213588222" to /tmp/TestFunctionalparallelMountCmdany-port2380259554/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1654017557213588222" to /tmp/TestFunctionalparallelMountCmdany-port2380259554/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1654017557213588222" to /tmp/TestFunctionalparallelMountCmdany-port2380259554/001/test-1654017557213588222
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (425.665461ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
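The failed first probe above is expected: `minikube mount` serves the host directory over 9p and the guest-side mount becomes visible a moment later, so the helper simply re-runs the findmnt probe (as the retry just below shows) until it succeeds. A sketch of that poll; the attempt budget is illustrative.

// mountpoll_sketch.go: re-run the findmnt probe until the 9p mount appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p",
			"functional-20220531171704-6903", "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	panic("/mount-9p never appeared")
}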
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 31 17:19 created-by-test
-rw-r--r-- 1 docker docker 24 May 31 17:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 31 17:19 test-1654017557213588222
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh cat /mount-9p/test-1654017557213588222

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220531171704-6903 replace --force -f testdata/busybox-mount-test.yaml

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [8b36bfd2-dee4-4f08-b0f8-397661403685] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [8b36bfd2-dee4-4f08-b0f8-397661403685] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [8b36bfd2-dee4-4f08-b0f8-397661403685] Running
E0531 17:19:27.633087    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
helpers_test.go:342: "busybox-mount" [8b36bfd2-dee4-4f08-b0f8-397661403685] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [8b36bfd2-dee4-4f08-b0f8-397661403685] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.007364983s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220531171704-6903 logs busybox-mount

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220531171704-6903 /tmp/TestFunctionalparallelMountCmdany-port2380259554/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.83s)
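The first findmnt probe above exits non-zero because the 9p mount is not yet visible inside the guest when the check races the mount daemon; the harness simply retries until it appears. A hypothetical polling helper capturing that pattern (waitForMount is illustrative and not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForMount polls `findmnt` over ssh until the 9p mount is visible
	// or the deadline passes, mirroring the retry seen in the log above.
	func waitForMount(bin, profile, mountPoint string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			probe := fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint)
			if exec.Command(bin, "-p", profile, "ssh", probe).Run() == nil {
				return nil // mount is visible inside the guest
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("mount %s never appeared", mountPoint)
	}

	func main() {
		err := waitForMount("out/minikube-linux-amd64",
			"functional-20220531171704-6903", "/mount-9p", 30*time.Second)
		if err != nil {
			panic(err)
		}
	}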

TestFunctional/parallel/MountCmd/specific-port (2.51s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220531171704-6903 /tmp/TestFunctionalparallelMountCmdspecific-port1754487596/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (465.098181ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220531171704-6903 /tmp/TestFunctionalparallelMountCmdspecific-port1754487596/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh "sudo umount -f /mount-9p": exit status 1 (390.00037ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220531171704-6903 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220531171704-6903 /tmp/TestFunctionalparallelMountCmdspecific-port1754487596/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.51s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220531171704-6903
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220531171704-6903
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220531171704-6903
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (75.4s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220531171940-6903 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0531 17:19:45.554551    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:20:06.034789    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:20:46.995709    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220531171940-6903 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m15.402344202s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (75.40s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.1s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 addons enable ingress --alsologtostderr -v=5: (9.095625587s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.10s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.35s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (33.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220531171940-6903 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220531171940-6903 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.328710987s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220531171940-6903 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220531171940-6903 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [5af2f075-484d-4e3a-ace5-56caccc6ffdf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [5af2f075-484d-4e3a-ace5-56caccc6ffdf] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.006746908s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220531171940-6903 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 addons disable ingress-dns --alsologtostderr -v=1: (3.693589982s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220531171940-6903 addons disable ingress --alsologtostderr -v=1: (7.242174892s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (33.49s)
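The ingress assertion above curls 127.0.0.1 from inside the node with an explicit Host header, so the nginx ingress controller matches the nginx.example.com rule and proxies to the test pod. A minimal sketch of that check (binary, profile, and curl command taken verbatim from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		bin := "out/minikube-linux-amd64"
		profile := "ingress-addon-legacy-20220531171940-6903"
		// The Host header, not the URL, selects the ingress rule.
		out, err := exec.Command(bin, "-p", profile, "ssh",
			"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // expect the nginx welcome page via the ingress
	}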

TestJSONOutput/start/Command (44.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220531172141-6903 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0531 17:22:08.915866    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220531172141-6903 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (44.837428873s)
--- PASS: TestJSONOutput/start/Command (44.84s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220531172141-6903 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220531172141-6903 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (15.67s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220531172141-6903 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220531172141-6903 --output=json --user=testUser: (15.674432146s)
--- PASS: TestJSONOutput/stop/Command (15.67s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220531172248-6903 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220531172248-6903 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.255169ms)

-- stdout --
	{"specversion":"1.0","id":"407d9479-eb31-4f33-be61-03764a0aabbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220531172248-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b429d692-a2cf-4eb7-b27c-132ba1c6df73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"dd191923-217e-450e-9295-b2ea6c407de3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d34edde6-7520-4446-9fef-8834e67fcb0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig"}}
	{"specversion":"1.0","id":"c697dd37-64f8-4988-b805-7a3439339abd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube"}}
	{"specversion":"1.0","id":"148b72a0-aa8d-469f-8b31-adb68c3b522e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0cc6a92f-fdb5-49ca-80ce-6ca860569c94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220531172248-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220531172248-6903
--- PASS: TestErrorJSONOutput (0.28s)
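Each stdout line above is a CloudEvents-style JSON object (specversion, type, data), and the final io.k8s.sigs.minikube.error event carries the exit code. A small sketch of consuming such a stream, with field names taken from the JSON shown above (the struct is an assumption for illustration, not minikube's own type):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe the output of `minikube start --output=json` into stdin.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" {
				continue
			}
			var ev cloudEvent
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				continue // skip any non-JSON noise
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}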

TestKicCustomNetwork/create_custom_network (31.17s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220531172248-6903 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220531172248-6903 --network=: (28.973093449s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220531172248-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220531172248-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220531172248-6903: (2.163237279s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.17s)

TestKicCustomNetwork/use_default_bridge_network (25.85s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220531172319-6903 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220531172319-6903 --network=bridge: (23.816513267s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220531172319-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220531172319-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220531172319-6903: (2.003141762s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.85s)

TestKicExistingNetwork (26.17s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220531172345-6903 --network=existing-network
E0531 17:23:57.611202    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:23:57.616452    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:23:57.626673    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:23:57.646874    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:23:57.687855    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:23:57.768140    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:23:57.928513    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:23:58.249046    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:23:58.889886    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:24:00.170735    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:24:02.731270    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:24:07.852124    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220531172345-6903 --network=existing-network: (23.797438912s)
helpers_test.go:175: Cleaning up "existing-network-20220531172345-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220531172345-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220531172345-6903: (2.169618446s)
--- PASS: TestKicExistingNetwork (26.17s)

TestKicCustomSubnet (26.41s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220531172411-6903 --subnet=192.168.60.0/24
E0531 17:24:18.092716    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:24:25.072804    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220531172411-6903 --subnet=192.168.60.0/24: (24.204027566s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220531172411-6903 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220531172411-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220531172411-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220531172411-6903: (2.176926162s)
--- PASS: TestKicCustomSubnet (26.41s)
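The subnet check above reads the network back with docker's Go-template inspect format. A minimal sketch of that assertion (network name, format string, and expected subnet all verbatim from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "network", "inspect",
			"custom-subnet-20220531172411-6903",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Println("subnet matches:", got == "192.168.60.0/24")
	}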

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (54.97s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:42: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220531172437-6903
E0531 17:24:38.573099    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:24:52.756911    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
minikube_profile_test.go:42: (dbg) Done: out/minikube-linux-amd64 start -p first-20220531172437-6903: (24.206106889s)
minikube_profile_test.go:42: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220531172437-6903
E0531 17:25:19.534336    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
minikube_profile_test.go:42: (dbg) Done: out/minikube-linux-amd64 start -p second-20220531172437-6903: (25.116034711s)
minikube_profile_test.go:49: (dbg) Run:  out/minikube-linux-amd64 profile first-20220531172437-6903
minikube_profile_test.go:53: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:49: (dbg) Run:  out/minikube-linux-amd64 profile second-20220531172437-6903
minikube_profile_test.go:53: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220531172437-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220531172437-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220531172437-6903: (2.217606497s)
helpers_test.go:175: Cleaning up "first-20220531172437-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220531172437-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220531172437-6903: (2.206703142s)
--- PASS: TestMinikubeProfile (54.97s)

TestMountStart/serial/StartWithMountFirst (4.61s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220531172532-6903 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220531172532-6903 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.613209312s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.61s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220531172532-6903 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (4.69s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220531172532-6903 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220531172532-6903 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.693102684s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.69s)

TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220531172532-6903 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (1.79s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220531172532-6903 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220531172532-6903 --alsologtostderr -v=5: (1.793993126s)
--- PASS: TestMountStart/serial/DeleteFirst (1.79s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220531172532-6903 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220531172532-6903
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220531172532-6903: (1.265936176s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (6.46s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220531172532-6903
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220531172532-6903: (5.461968389s)
--- PASS: TestMountStart/serial/RestartStopped (6.46s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220531172532-6903 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (76.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220531172554-6903 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0531 17:26:05.001743    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:05.006989    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:05.017788    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:05.038037    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:05.078593    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:05.158933    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:05.319963    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:05.640579    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:06.281287    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:07.562344    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:10.124159    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:15.244799    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:25.485589    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:26:41.454652    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:26:45.966701    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220531172554-6903 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.584476942s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.13s)

TestMultiNode/serial/DeployApp2Nodes (4.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- rollout status deployment/busybox: (2.311164621s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-5788b -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-tb8q5 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-5788b -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-tb8q5 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-5788b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-tb8q5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.01s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-5788b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-5788b -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-tb8q5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220531172554-6903 -- exec busybox-7978565885-tb8q5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
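The pipeline above plucks the host's IP out of busybox nslookup output (`awk 'NR==5'` takes the fifth line, `cut -d' ' -f3` the third space-separated field) and then pings it from the pod. A hypothetical reproduction using plain kubectl with the cluster context instead of the minikube kubectl wrapper (pod name copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "multinode-20220531172554-6903"
		pod := "busybox-7978565885-5788b"
		// Resolve host.minikube.internal from inside the pod.
		out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
			"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
		if err != nil {
			panic(err)
		}
		ip := strings.TrimSpace(string(out))
		fmt.Println("host IP:", ip)
		// One ping is enough to prove pod-to-host connectivity.
		err = exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
			"sh", "-c", "ping -c 1 "+ip).Run()
		if err != nil {
			panic(err)
		}
	}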

TestMultiNode/serial/AddNode (41.27s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220531172554-6903 -v 3 --alsologtostderr
E0531 17:27:26.926851    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220531172554-6903 -v 3 --alsologtostderr: (40.55876224s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.27s)

TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

TestMultiNode/serial/CopyFile (11.41s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp testdata/cp-test.txt multinode-20220531172554-6903:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2497781132/001/cp-test_multinode-20220531172554-6903.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903:/home/docker/cp-test.txt multinode-20220531172554-6903-m02:/home/docker/cp-test_multinode-20220531172554-6903_multinode-20220531172554-6903-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m02 "sudo cat /home/docker/cp-test_multinode-20220531172554-6903_multinode-20220531172554-6903-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903:/home/docker/cp-test.txt multinode-20220531172554-6903-m03:/home/docker/cp-test_multinode-20220531172554-6903_multinode-20220531172554-6903-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m03 "sudo cat /home/docker/cp-test_multinode-20220531172554-6903_multinode-20220531172554-6903-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp testdata/cp-test.txt multinode-20220531172554-6903-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2497781132/001/cp-test_multinode-20220531172554-6903-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903-m02:/home/docker/cp-test.txt multinode-20220531172554-6903:/home/docker/cp-test_multinode-20220531172554-6903-m02_multinode-20220531172554-6903.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903 "sudo cat /home/docker/cp-test_multinode-20220531172554-6903-m02_multinode-20220531172554-6903.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903-m02:/home/docker/cp-test.txt multinode-20220531172554-6903-m03:/home/docker/cp-test_multinode-20220531172554-6903-m02_multinode-20220531172554-6903-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m03 "sudo cat /home/docker/cp-test_multinode-20220531172554-6903-m02_multinode-20220531172554-6903-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp testdata/cp-test.txt multinode-20220531172554-6903-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2497781132/001/cp-test_multinode-20220531172554-6903-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903-m03:/home/docker/cp-test.txt multinode-20220531172554-6903:/home/docker/cp-test_multinode-20220531172554-6903-m03_multinode-20220531172554-6903.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903 "sudo cat /home/docker/cp-test_multinode-20220531172554-6903-m03_multinode-20220531172554-6903.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 cp multinode-20220531172554-6903-m03:/home/docker/cp-test.txt multinode-20220531172554-6903-m02:/home/docker/cp-test_multinode-20220531172554-6903-m03_multinode-20220531172554-6903-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 ssh -n multinode-20220531172554-6903-m02 "sudo cat /home/docker/cp-test_multinode-20220531172554-6903-m03_multinode-20220531172554-6903-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.41s)
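
Taken together, the copy matrix above exercises every direction `minikube cp` supports. In sketch form, with <profile> and <node> as placeholders, the three patterns are:

    out/minikube-linux-amd64 -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt                     # host -> node
    out/minikube-linux-amd64 -p <profile> cp <node>:/home/docker/cp-test.txt /tmp/cp-test_<node>.txt                  # node -> host
    out/minikube-linux-amd64 -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/cp-test_ab.txt   # node -> node

Each copy is then verified with `ssh -n <node> "sudo cat <file>"` against the expected contents.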

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220531172554-6903 node stop m03: (1.256821383s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220531172554-6903 status: exit status 7 (568.597979ms)

-- stdout --
	multinode-20220531172554-6903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220531172554-6903-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220531172554-6903-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --alsologtostderr: exit status 7 (568.403306ms)

-- stdout --
	multinode-20220531172554-6903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220531172554-6903-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220531172554-6903-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0531 17:28:10.805712   96612 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:28:10.805825   96612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:28:10.805840   96612 out.go:309] Setting ErrFile to fd 2...
	I0531 17:28:10.805847   96612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:28:10.805942   96612 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:28:10.806097   96612 out.go:303] Setting JSON to false
	I0531 17:28:10.806121   96612 mustload.go:65] Loading cluster: multinode-20220531172554-6903
	I0531 17:28:10.806416   96612 config.go:178] Loaded profile config "multinode-20220531172554-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:28:10.806433   96612 status.go:253] checking status of multinode-20220531172554-6903 ...
	I0531 17:28:10.806786   96612 cli_runner.go:164] Run: docker container inspect multinode-20220531172554-6903 --format={{.State.Status}}
	I0531 17:28:10.837749   96612 status.go:328] multinode-20220531172554-6903 host status = "Running" (err=<nil>)
	I0531 17:28:10.837772   96612 host.go:66] Checking if "multinode-20220531172554-6903" exists ...
	I0531 17:28:10.838042   96612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220531172554-6903
	I0531 17:28:10.866678   96612 host.go:66] Checking if "multinode-20220531172554-6903" exists ...
	I0531 17:28:10.866912   96612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:28:10.866952   96612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220531172554-6903
	I0531 17:28:10.894331   96612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/multinode-20220531172554-6903/id_rsa Username:docker}
	I0531 17:28:10.971255   96612 ssh_runner.go:195] Run: systemctl --version
	I0531 17:28:10.974512   96612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:28:10.982737   96612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:28:11.078302   96612 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-31 17:28:11.009765573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:28:11.078821   96612 kubeconfig.go:92] found "multinode-20220531172554-6903" server: "https://192.168.49.2:8443"
	I0531 17:28:11.078845   96612 api_server.go:165] Checking apiserver status ...
	I0531 17:28:11.078872   96612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 17:28:11.087717   96612 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	I0531 17:28:11.094529   96612 api_server.go:181] apiserver freezer: "8:freezer:/docker/1e14480d1f0202075d96ec0c80db68c1ffcd0ef89c814d460aac0be095f37319/kubepods/burstable/pod93052553ed525b25b3b41400fe6962ff/8e69301ee471436a109779679fb06561f0f272b52ded64adfe92b4fd0d861668"
	I0531 17:28:11.094582   96612 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1e14480d1f0202075d96ec0c80db68c1ffcd0ef89c814d460aac0be095f37319/kubepods/burstable/pod93052553ed525b25b3b41400fe6962ff/8e69301ee471436a109779679fb06561f0f272b52ded64adfe92b4fd0d861668/freezer.state
	I0531 17:28:11.100480   96612 api_server.go:203] freezer state: "THAWED"
	I0531 17:28:11.100513   96612 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 17:28:11.105079   96612 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 17:28:11.105100   96612 status.go:419] multinode-20220531172554-6903 apiserver status = Running (err=<nil>)
	I0531 17:28:11.105111   96612 status.go:255] multinode-20220531172554-6903 status: &{Name:multinode-20220531172554-6903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 17:28:11.105132   96612 status.go:253] checking status of multinode-20220531172554-6903-m02 ...
	I0531 17:28:11.105449   96612 cli_runner.go:164] Run: docker container inspect multinode-20220531172554-6903-m02 --format={{.State.Status}}
	I0531 17:28:11.138049   96612 status.go:328] multinode-20220531172554-6903-m02 host status = "Running" (err=<nil>)
	I0531 17:28:11.138078   96612 host.go:66] Checking if "multinode-20220531172554-6903-m02" exists ...
	I0531 17:28:11.138308   96612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220531172554-6903-m02
	I0531 17:28:11.167249   96612 host.go:66] Checking if "multinode-20220531172554-6903-m02" exists ...
	I0531 17:28:11.167551   96612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 17:28:11.167593   96612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220531172554-6903-m02
	I0531 17:28:11.196065   96612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/multinode-20220531172554-6903-m02/id_rsa Username:docker}
	I0531 17:28:11.275260   96612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 17:28:11.283603   96612 status.go:255] multinode-20220531172554-6903-m02 status: &{Name:multinode-20220531172554-6903-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0531 17:28:11.283642   96612 status.go:253] checking status of multinode-20220531172554-6903-m03 ...
	I0531 17:28:11.283911   96612 cli_runner.go:164] Run: docker container inspect multinode-20220531172554-6903-m03 --format={{.State.Status}}
	I0531 17:28:11.314140   96612 status.go:328] multinode-20220531172554-6903-m03 host status = "Stopped" (err=<nil>)
	I0531 17:28:11.314159   96612 status.go:341] host is not running, skipping remaining checks
	I0531 17:28:11.314167   96612 status.go:255] multinode-20220531172554-6903-m03 status: &{Name:multinode-20220531172554-6903-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
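
Note that `minikube status` deliberately exits non-zero (7 here) as soon as any node is stopped, which makes the check scriptable; a minimal sketch, with <profile> as a placeholder:

    out/minikube-linux-amd64 -p <profile> node stop m03
    if ! out/minikube-linux-amd64 -p <profile> status; then
        echo "at least one node is not running"    # status exited 7, as in the run above
    fi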

TestMultiNode/serial/StartAfterStop (35.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220531172554-6903 node start m03 --alsologtostderr: (35.146220362s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.94s)
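
The recovery path is the mirror image of StopNode; a sketch, with <profile> as a placeholder:

    out/minikube-linux-amd64 -p <profile> node start m03 --alsologtostderr
    out/minikube-linux-amd64 -p <profile> status    # exits 0 again once every node is running
    kubectl get nodes                               # the restarted worker should be listed again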

TestMultiNode/serial/RestartKeepsNodes (172.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220531172554-6903
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220531172554-6903
E0531 17:28:48.847366    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:28:57.610377    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:29:25.073283    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:29:25.295614    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220531172554-6903: (41.232413464s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220531172554-6903 --wait=true -v=8 --alsologtostderr
E0531 17:31:05.001792    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
E0531 17:31:32.687883    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220531172554-6903 --wait=true -v=8 --alsologtostderr: (2m11.027083376s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220531172554-6903
--- PASS: TestMultiNode/serial/RestartKeepsNodes (172.38s)
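
The invariant under test is that a full stop/start cycle preserves the node list; in sketch form (<profile> is a placeholder):

    out/minikube-linux-amd64 node list -p <profile>      # record the node list
    out/minikube-linux-amd64 stop -p <profile>
    out/minikube-linux-amd64 start -p <profile> --wait=true -v=8 --alsologtostderr
    out/minikube-linux-amd64 node list -p <profile>      # must match the first listing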

TestMultiNode/serial/DeleteNode (5.09s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220531172554-6903 node delete m03: (4.427284049s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.09s)
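
The checks after `node delete` look at every layer: `status` for minikube's own view, `docker volume ls` for the node's volumes, and `kubectl get nodes` with a go-template that prints one Ready-condition status per remaining node. A minimal sketch of the flow (<profile> is a placeholder):

    out/minikube-linux-amd64 -p <profile> node delete m03
    kubectl get nodes    # m03 should no longer appear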

TestMultiNode/serial/StopMultiNode (40.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220531172554-6903 stop: (39.948839337s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220531172554-6903 status: exit status 7 (120.090614ms)

-- stdout --
	multinode-20220531172554-6903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220531172554-6903-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --alsologtostderr: exit status 7 (115.745017ms)

-- stdout --
	multinode-20220531172554-6903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220531172554-6903-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0531 17:32:24.852128  106837 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:32:24.852236  106837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:32:24.852245  106837 out.go:309] Setting ErrFile to fd 2...
	I0531 17:32:24.852250  106837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:32:24.852356  106837 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:32:24.852525  106837 out.go:303] Setting JSON to false
	I0531 17:32:24.852547  106837 mustload.go:65] Loading cluster: multinode-20220531172554-6903
	I0531 17:32:24.852922  106837 config.go:178] Loaded profile config "multinode-20220531172554-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0531 17:32:24.852940  106837 status.go:253] checking status of multinode-20220531172554-6903 ...
	I0531 17:32:24.853322  106837 cli_runner.go:164] Run: docker container inspect multinode-20220531172554-6903 --format={{.State.Status}}
	I0531 17:32:24.882389  106837 status.go:328] multinode-20220531172554-6903 host status = "Stopped" (err=<nil>)
	I0531 17:32:24.882407  106837 status.go:341] host is not running, skipping remaining checks
	I0531 17:32:24.882413  106837 status.go:255] multinode-20220531172554-6903 status: &{Name:multinode-20220531172554-6903 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 17:32:24.882430  106837 status.go:253] checking status of multinode-20220531172554-6903-m02 ...
	I0531 17:32:24.882639  106837 cli_runner.go:164] Run: docker container inspect multinode-20220531172554-6903-m02 --format={{.State.Status}}
	I0531 17:32:24.910626  106837 status.go:328] multinode-20220531172554-6903-m02 host status = "Stopped" (err=<nil>)
	I0531 17:32:24.910644  106837 status.go:341] host is not running, skipping remaining checks
	I0531 17:32:24.910650  106837 status.go:255] multinode-20220531172554-6903-m02 status: &{Name:multinode-20220531172554-6903-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.18s)

TestMultiNode/serial/RestartMultiNode (88.11s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220531172554-6903 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220531172554-6903 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m27.431701803s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220531172554-6903 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.11s)

TestMultiNode/serial/ValidateNameConflict (31.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220531172554-6903
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220531172554-6903-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220531172554-6903-m02 --driver=docker  --container-runtime=containerd: exit status 14 (78.985895ms)

-- stdout --
	* [multinode-20220531172554-6903-m02] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220531172554-6903-m02' is duplicated with machine name 'multinode-20220531172554-6903-m02' in profile 'multinode-20220531172554-6903'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220531172554-6903-m03 --driver=docker  --container-runtime=containerd
E0531 17:33:57.610399    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220531172554-6903-m03 --driver=docker  --container-runtime=containerd: (28.893025971s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220531172554-6903
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220531172554-6903: exit status 80 (335.088718ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220531172554-6903
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220531172554-6903-m03 already exists in multinode-20220531172554-6903-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220531172554-6903-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220531172554-6903-m03: (2.492418859s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.86s)
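
Two rules are pinned down here: a new profile may not reuse the machine name of a node in an existing multi-node profile (exit 14, MK_USAGE), and `node add` refuses to create a node whose generated machine name is already taken (exit 80, GUEST_NODE_ADD). In sketch form, for an existing multi-node profile <profile>:

    out/minikube-linux-amd64 node list -p <profile>                  # machines: <profile>, <profile>-m02, ...
    out/minikube-linux-amd64 start -p <profile>-m02 --driver=docker  # rejected: duplicates a machine name
    out/minikube-linux-amd64 start -p <profile>-m03 --driver=docker  # allowed: creates a separate profile
    out/minikube-linux-amd64 node add -p <profile>                   # rejected: the next node's name, <profile>-m03, is now taken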

TestPreload (133.64s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220531173429-6903 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0531 17:35:48.117842    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220531173429-6903 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m27.553026291s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220531173429-6903 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220531173429-6903 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.028764376s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220531173429-6903 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
E0531 17:36:05.003284    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220531173429-6903 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (42.339711285s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220531173429-6903 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220531173429-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220531173429-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220531173429-6903: (2.364275384s)
--- PASS: TestPreload (133.64s)
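
The scenario: with --preload=false, an image pulled by hand must survive a restart onto a newer Kubernetes patch release. A sketch of the flow (<profile> is a placeholder):

    out/minikube-linux-amd64 start -p <profile> --preload=false --kubernetes-version=v1.17.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 ssh -p <profile> -- sudo crictl pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 start -p <profile> --kubernetes-version=v1.17.3 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 ssh -p <profile> -- sudo crictl image ls    # busybox should still be listed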

TestScheduledStopUnix (119.96s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220531173642-6903 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220531173642-6903 --memory=2048 --driver=docker  --container-runtime=containerd: (43.207324066s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220531173642-6903 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220531173642-6903 -n scheduled-stop-20220531173642-6903
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220531173642-6903 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220531173642-6903 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220531173642-6903 -n scheduled-stop-20220531173642-6903
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220531173642-6903
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220531173642-6903 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220531173642-6903
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220531173642-6903: exit status 7 (92.009292ms)

-- stdout --
	scheduled-stop-20220531173642-6903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220531173642-6903 -n scheduled-stop-20220531173642-6903
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220531173642-6903 -n scheduled-stop-20220531173642-6903: exit status 7 (87.905211ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220531173642-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220531173642-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220531173642-6903: (5.117303896s)
--- PASS: TestScheduledStopUnix (119.96s)
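
The scheduled-stop surface exercised above, in sketch form (<profile> is a placeholder):

    out/minikube-linux-amd64 stop -p <profile> --schedule 5m                 # arm a stop five minutes out
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p <profile>    # inspect the pending schedule
    out/minikube-linux-amd64 stop -p <profile> --cancel-scheduled            # disarm it
    out/minikube-linux-amd64 stop -p <profile> --schedule 15s                # re-arm and let it fire; status then exits 7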

TestInsufficientStorage (16.81s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220531173842-6903 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220531173842-6903 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.146214635s)

-- stdout --
	{"specversion":"1.0","id":"6e22f7b7-3763-4713-8715-a7eb045c3f10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220531173842-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd051099-a9ec-4488-8c62-46edd7dfcee0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"7e038144-c698-4ecf-9c2a-23919d69bec0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6f1840b2-a3eb-4b0b-a268-50f18a95e1b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig"}}
	{"specversion":"1.0","id":"09fedd3e-fc09-43fa-b170-9ad6307cffff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube"}}
	{"specversion":"1.0","id":"54bc7f5d-611a-4fcf-a3f5-4c4fa07b77c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1f5ccb3b-39f8-48b4-8a1f-cbc6f257453b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f48430f7-a2f0-4ecc-ade3-2e4861fdcda7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bd83b5ae-c83d-4059-a403-90c4911b4618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e40d48d6-6546-463e-9ead-befde0c6e9a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with the root privilege"}}
	{"specversion":"1.0","id":"83f366e8-9222-444e-8d7a-2c70bc2c037c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220531173842-6903 in cluster insufficient-storage-20220531173842-6903","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f52b536-8fe2-4e1f-9df0-79eb63952b1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"46c94bb0-e1f2-45e4-8aff-20d7623164e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d874b4ba-cffc-452b-b6d5-2ea6ca57d9f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220531173842-6903 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220531173842-6903 --output=json --layout=cluster: exit status 7 (332.745553ms)

-- stdout --
	{"Name":"insufficient-storage-20220531173842-6903","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220531173842-6903","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0531 17:38:53.215810  127349 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220531173842-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220531173842-6903 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220531173842-6903 --output=json --layout=cluster: exit status 7 (332.875442ms)

-- stdout --
	{"Name":"insufficient-storage-20220531173842-6903","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220531173842-6903","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0531 17:38:53.549397  127459 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220531173842-6903" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	E0531 17:38:53.557286  127459 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/insufficient-storage-20220531173842-6903/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220531173842-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220531173842-6903
E0531 17:38:57.611230    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220531173842-6903: (5.994255559s)
--- PASS: TestInsufficientStorage (16.81s)
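
With --output=json, every progress step and the final error arrive as one CloudEvents-style JSON object per line, so the failure can be picked out mechanically; a sketch assuming jq is installed (<profile> is a placeholder):

    out/minikube-linux-amd64 start -p <profile> --output=json --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # here prints: RSRC_DOCKER_STORAGE: Docker is out of disk space! ... (and the start itself exits 26)

Judging by the event stream, the full disk is simulated through MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE rather than by actually filling /var.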

TestRunningBinaryUpgrade (89.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1197622747.exe start -p running-upgrade-20220531173859-6903 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.1197622747.exe start -p running-upgrade-20220531173859-6903 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.986800143s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220531173859-6903 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220531173859-6903 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.91535724s)
helpers_test.go:175: Cleaning up "running-upgrade-20220531173859-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220531173859-6903

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220531173859-6903: (2.949801626s)
--- PASS: TestRunningBinaryUpgrade (89.32s)
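
The upgrade path under test: a cluster created by an old release binary is adopted in place by the current build. In sketch form (<profile> is a placeholder, and the versioned binary name stands in for whatever old release the test downloads):

    /tmp/minikube-v1.16.0.<suffix>.exe start -p <profile> --memory=2200 --vm-driver=docker --container-runtime=containerd    # old binary, legacy --vm-driver flag
    out/minikube-linux-amd64 start -p <profile> --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd    # same profile, current binary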

TestKubernetesUpgrade (145.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220531174124-6903 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220531174124-6903 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.319586699s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220531174124-6903
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220531174124-6903: (1.319085814s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220531174124-6903 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220531174124-6903 status --format={{.Host}}: exit status 7 (143.278652ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220531174124-6903 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220531174124-6903 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.870680258s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220531174124-6903 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220531174124-6903 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220531174124-6903 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (79.418504ms)

-- stdout --
	* [kubernetes-upgrade-20220531174124-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220531174124-6903
	    minikube start -p kubernetes-upgrade-20220531174124-6903 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220531174124-69032 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220531174124-6903 --kubernetes-version=v1.23.6
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220531174124-6903 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220531174124-6903 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.705941006s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220531174124-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220531174124-6903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220531174124-6903: (2.57449186s)
--- PASS: TestKubernetesUpgrade (145.06s)
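
Three behaviours are pinned down: an in-place upgrade from v1.16.0 to v1.23.6 succeeds, a downgrade is refused before anything is touched (exit 106, K8S_DOWNGRADE_UNSUPPORTED), and a restart at the current version still works afterwards. The downgrade route minikube itself suggests is to recreate the cluster:

    minikube delete -p <profile>
    minikube start -p <profile> --kubernetes-version=v1.16.0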

TestMissingContainerUpgrade (144.08s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.3942686641.exe start -p missing-upgrade-20220531173859-6903 --memory=2200 --driver=docker  --container-runtime=containerd
E0531 17:39:25.073395    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.3942686641.exe start -p missing-upgrade-20220531173859-6903 --memory=2200 --driver=docker  --container-runtime=containerd: (1m21.080290245s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220531173859-6903

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220531173859-6903: (10.313049321s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220531173859-6903
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220531173859-6903 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220531173859-6903 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.012435463s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220531173859-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220531173859-6903

=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220531173859-6903: (3.141677392s)
--- PASS: TestMissingContainerUpgrade (144.08s)
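
Here the node container is removed behind minikube's back to simulate a half-deleted legacy cluster, and the current binary is expected to recreate it; in sketch form (<profile> is a placeholder, and the docker container carries the profile name):

    /tmp/minikube-v1.9.1.<suffix>.exe start -p <profile> --memory=2200 --driver=docker --container-runtime=containerd    # old binary creates the cluster
    docker stop <profile> && docker rm <profile>    # delete the node container out from under it
    out/minikube-linux-amd64 start -p <profile> --alsologtostderr -v=1 --driver=docker --container-runtime=containerd    # must rebuild the missing container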

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (90.196108ms)

-- stdout --
	* [NoKubernetes-20220531173859-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
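
The exit status 14 above is the expected MK_USAGE error: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the conflict and the fix the message suggests (profile name illustrative):

	$ minikube start -p nok8s --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes
	$ minikube config unset kubernetes-version    # clears a version pinned in the global config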

TestNoKubernetes/serial/StartWithK8s (64.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --driver=docker  --container-runtime=containerd: (1m3.460847638s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220531173859-6903 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (64.03s)

TestNoKubernetes/serial/StartWithStopK8s (19.57s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.698599901s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220531173859-6903 status -o json
E0531 17:40:20.656243    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220531173859-6903 status -o json: exit status 2 (475.4232ms)

-- stdout --
	{"Name":"NoKubernetes-20220531173859-6903","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220531173859-6903

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220531173859-6903: (2.391965331s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.57s)
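
Note that the exit status 2 above is what the test expects: minikube status reports component state through its exit code, so a profile whose host is Running but whose kubelet and apiserver are Stopped exits non-zero even though the JSON is printed normally. Checking by hand (profile name illustrative):

	$ minikube -p nok8s status -o json; echo "exit=$?"    # exit=2 while Kubelet/APIServer are Stopped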

TestNoKubernetes/serial/Start (4.44s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.441214594s)
--- PASS: TestNoKubernetes/serial/Start (4.44s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220531173859-6903 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220531173859-6903 "sudo systemctl is-active --quiet service kubelet": exit status 1 (416.351036ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
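
The "Process exited with status 3" line is the success signal here: systemctl is-active exits non-zero (conventionally 3 for an inactive unit), and minikube ssh surfaces the remote failure as its own exit status 1. Reproducing the check (profile name illustrative):

	$ minikube ssh -p nok8s "sudo systemctl is-active --quiet service kubelet"; echo "exit=$?"
	exit=1    # remote status 3 (kubelet inactive), reported by minikube ssh as 1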

TestNoKubernetes/serial/ProfileList (4.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.54081289s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.04s)

TestNetworkPlugins/group/false (0.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220531174029-6903 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220531174029-6903 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (214.021561ms)

-- stdout --
	* [false-20220531174029-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0531 17:40:29.812855  146224 out.go:296] Setting OutFile to fd 1 ...
	I0531 17:40:29.813015  146224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:40:29.813028  146224 out.go:309] Setting ErrFile to fd 2...
	I0531 17:40:29.813033  146224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:40:29.813114  146224 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
	I0531 17:40:29.813366  146224 out.go:303] Setting JSON to false
	I0531 17:40:29.814516  146224 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4981,"bootTime":1654013849,"procs":597,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:40:29.814572  146224 start.go:125] virtualization: kvm guest
	I0531 17:40:29.817065  146224 out.go:177] * [false-20220531174029-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:40:29.818570  146224 notify.go:193] Checking for updates...
	I0531 17:40:29.818591  146224 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:40:29.820051  146224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:40:29.821318  146224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
	I0531 17:40:29.822753  146224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
	I0531 17:40:29.824364  146224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:40:29.826088  146224 config.go:178] Loaded profile config "NoKubernetes-20220531173859-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0531 17:40:29.826166  146224 config.go:178] Loaded profile config "missing-upgrade-20220531173859-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I0531 17:40:29.826211  146224 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:40:29.861425  146224 docker.go:137] docker version: linux-20.10.16
	I0531 17:40:29.861500  146224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:40:29.954865  146224 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-31 17:40:29.88665098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:40:29.955023  146224 docker.go:254] overlay module found
	I0531 17:40:29.958264  146224 out.go:177] * Using the docker driver based on user configuration
	I0531 17:40:29.959665  146224 start.go:284] selected driver: docker
	I0531 17:40:29.959677  146224 start.go:806] validating driver "docker" against <nil>
	I0531 17:40:29.959693  146224 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:40:29.961940  146224 out.go:177] 
	W0531 17:40:29.963405  146224 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0531 17:40:29.964746  146224 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20220531174029-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220531174029-6903
--- PASS: TestNetworkPlugins/group/false (0.48s)
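
This group passes precisely because the start is rejected: with the containerd runtime minikube requires a CNI, so --cni=false is refused up front with MK_USAGE (exit status 14). Minimal reproduction (profile name illustrative):

	$ minikube start -p false-cni --memory=2048 --cni=false --driver=docker --container-runtime=containerd
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI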

TestNoKubernetes/serial/Stop (5.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220531173859-6903

=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220531173859-6903: (5.48717961s)
--- PASS: TestNoKubernetes/serial/Stop (5.49s)

TestNoKubernetes/serial/StartNoArgs (6.77s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220531173859-6903 --driver=docker  --container-runtime=containerd: (6.769667445s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.77s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220531173859-6903 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220531173859-6903 "sudo systemctl is-active --quiet service kubelet": exit status 1 (397.602104ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

TestPause/serial/Start (60.33s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220531174123-6903 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220531174123-6903 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m0.333365397s)
--- PASS: TestPause/serial/Start (60.33s)

TestStoppedBinaryUpgrade/Setup (0.37s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.37s)

TestStoppedBinaryUpgrade/Upgrade (92.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.1557637709.exe start -p stopped-upgrade-20220531174200-6903 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.1557637709.exe start -p stopped-upgrade-20220531174200-6903 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (34.135478098s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.1557637709.exe -p stopped-upgrade-20220531174200-6903 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.1557637709.exe -p stopped-upgrade-20220531174200-6903 stop: (8.443630161s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220531174200-6903 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220531174200-6903 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.37455801s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (92.95s)
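
A sketch of the stopped-binary upgrade path exercised above, assuming the legacy release is saved as ./minikube-v1.16.0 (profile name illustrative; the old release still takes --vm-driver):

	$ ./minikube-v1.16.0 start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
	$ ./minikube-v1.16.0 -p stopped-upgrade stop
	$ minikube start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=containerd    # the new binary must adopt the stopped cluster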

TestPause/serial/SecondStartNoReconfiguration (17.27s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220531174123-6903 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0531 17:42:28.048646    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220531174123-6903 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.255295809s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.27s)

TestPause/serial/Pause (1.39s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220531174123-6903 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220531174123-6903 --alsologtostderr -v=5: (1.391802213s)
--- PASS: TestPause/serial/Pause (1.39s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220531174123-6903 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220531174123-6903 --output=json --layout=cluster: exit status 2 (385.345323ms)

-- stdout --
	{"Name":"pause-20220531174123-6903","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220531174123-6903","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
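
The cluster layout reuses HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), and a paused cluster makes the status command itself exit 2, which is why the non-zero exit above is accepted. A sketch, assuming jq is installed (profile name illustrative):

	$ minikube status -p pause-demo --output=json --layout=cluster | jq -r '.StatusName'
	Paused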

TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220531174123-6903 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (5.37s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220531174123-6903 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220531174123-6903 --alsologtostderr -v=5: (5.372620534s)
--- PASS: TestPause/serial/PauseAgain (5.37s)

TestPause/serial/DeletePaused (5.26s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220531174123-6903 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220531174123-6903 --alsologtostderr -v=5: (5.257355696s)
--- PASS: TestPause/serial/DeletePaused (5.26s)

TestPause/serial/VerifyDeletedResources (0.88s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220531174123-6903
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220531174123-6903: exit status 1 (33.156065ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220531174123-6903

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.88s)
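
The non-zero exit from docker volume inspect is the point of the check: after delete, the profile's volume must be gone, and inspect then prints an empty array and exits 1. The same check in a script:

	$ docker volume inspect pause-20220531174123-6903 >/dev/null 2>&1 || echo "volume deleted"
	volume deleted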

TestNetworkPlugins/group/auto/Start (74.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220531174028-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220531174028-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (1m14.814024085s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.81s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220531174200-6903
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

TestNetworkPlugins/group/kindnet/Start (70.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220531174029-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220531174029-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m10.960037028s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.96s)

TestNetworkPlugins/group/cilium/Start (86.79s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220531174030-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
E0531 17:43:57.610398    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220531174030-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m26.787442689s)
--- PASS: TestNetworkPlugins/group/cilium/Start (86.79s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220531174028-6903 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220531174028-6903 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-p4zhf" [9e5ed903-ab43-43f9-846c-c3f755c547f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-p4zhf" [9e5ed903-ab43-43f9-846c-c3f755c547f8] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.006165022s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.21s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531174028-6903 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220531174028-6903 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220531174028-6903 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
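
Taken together, the last three checks probe DNS resolution, pod-to-localhost traffic, and hairpin traffic (a pod reaching itself through its own service) from inside the netcat deployment; run by hand they look like:

	$ kubectl --context auto-20220531174028-6903 exec deployment/netcat -- nslookup kubernetes.default
	$ kubectl --context auto-20220531174028-6903 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	$ kubectl --context auto-20220531174028-6903 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"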

TestNetworkPlugins/group/enable-default-cni/Start (318.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220531174029-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220531174029-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (5m18.81346185s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (318.81s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-wcpqn" [2dcdc7a5-d6a4-4448-b152-379ea1da3f76] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014116791s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220531174029-6903 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220531174029-6903 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-tg4kh" [20514b98-eefe-4927-9637-da728c39d5fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-tg4kh" [20514b98-eefe-4927-9637-da728c39d5fe] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006016829s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.39s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220531174029-6903 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220531174029-6903 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220531174029-6903 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (292.66s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220531174029-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220531174029-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (4m52.657188305s)
--- PASS: TestNetworkPlugins/group/bridge/Start (292.66s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-ccrx5" [ce30862b-3ab2-4863-9803-14b503edbcff] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014204122s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220531174030-6903 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.37s)

TestNetworkPlugins/group/cilium/NetCatPod (9.95s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220531174030-6903 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-hvmdb" [5b3a2c74-ec49-4985-a815-d3b08e7e8564] Pending
helpers_test.go:342: "netcat-668db85669-hvmdb" [5b3a2c74-ec49-4985-a815-d3b08e7e8564] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-hvmdb" [5b3a2c74-ec49-4985-a815-d3b08e7e8564] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.005991583s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.95s)

TestNetworkPlugins/group/cilium/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220531174030-6903 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220531174030-6903 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220531174030-6903 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (100.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220531174534-6903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0531 17:46:05.002201    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/ingress-addon-legacy-20220531171940-6903/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220531174534-6903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m40.826274339s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (100.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220531174534-6903 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [4da5b6ef-b73b-4873-8074-b6e76582abc8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [4da5b6ef-b73b-4873-8074-b6e76582abc8] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.011302453s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220531174534-6903 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220531174534-6903 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220531174534-6903 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)
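
The addon is enabled with per-addon image and registry overrides, which is how the suite points metrics-server at a placeholder registry; the general form is (profile placeholder left generic):

	$ minikube addons enable metrics-server -p <profile> --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	$ kubectl --context <profile> describe deploy/metrics-server -n kube-system    # confirm the override landed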

TestStartStop/group/old-k8s-version/serial/Stop (20.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220531174534-6903 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220531174534-6903 --alsologtostderr -v=3: (20.153494426s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903: exit status 7 (100.596038ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220531174534-6903 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
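
Here exit status 7 from minikube status corresponds to a stopped host, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon on the stopped profile:

	$ minikube status --format='{{.Host}}' -p <profile>; echo "exit=$?"
	Stopped
	exit=7
	$ minikube addons enable dashboard -p <profile> --images=MetricsScraper=k8s.gcr.io/echoserver:1.4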

TestStartStop/group/old-k8s-version/serial/SecondStart (427.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220531174534-6903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0531 17:48:57.610876    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
E0531 17:49:11.143997    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:11.149246    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:11.159472    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:11.179707    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:11.219935    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:11.300234    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:11.460619    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:11.781564    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:12.422517    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:13.703198    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:16.263491    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:21.383683    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:25.073259    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
E0531 17:49:31.624111    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:48.439999    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:48.445949    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:48.456979    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:48.477239    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:48.517691    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:48.598638    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:48.759133    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:49.079824    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:49.720768    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:51.001870    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory
E0531 17:49:52.104774    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/auto-20220531174028-6903/client.crt: no such file or directory
E0531 17:49:53.562814    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220531174534-6903 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m7.444216811s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (427.82s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220531174029-6903 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220531174029-6903 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-p264f" [263732d3-0976-4129-a9d7-fb01c7a39ae6] Pending
E0531 17:49:58.683105    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/kindnet-20220531174029-6903/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-668db85669-p264f" [263732d3-0976-4129-a9d7-fb01c7a39ae6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-668db85669-p264f" [263732d3-0976-4129-a9d7-fb01c7a39ae6] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006083358s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220531174029-6903 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (8.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220531174029-6903 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-9prb8" [4883af22-b870-4005-924e-13d63b9ab06b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-668db85669-9prb8" [4883af22-b870-4005-924e-13d63b9ab06b] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.049739608s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-n8mqk" [a7f2d7f2-50a6-408d-bd05-285f35c89358] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011891678s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-n8mqk" [a7f2d7f2-50a6-408d-bd05-285f35c89358] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005404768s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220531174534-6903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220531174534-6903 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/old-k8s-version/serial/Pause (2.97s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220531174534-6903 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903: exit status 2 (379.66792ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903: exit status 2 (370.341126ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220531174534-6903 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220531174534-6903 -n old-k8s-version-20220531174534-6903
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

TestStartStop/group/newest-cni/serial/FirstStart (248.34s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220531175602-6903 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220531175602-6903 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: (4m8.336732546s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (248.34s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.55s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220531175602-6903 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.55s)

TestStartStop/group/newest-cni/serial/Stop (20.07s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220531175602-6903 --alsologtostderr -v=3
E0531 18:00:15.863503    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/cilium-20220531174030-6903/client.crt: no such file or directory
E0531 18:00:18.907697    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:20.194584    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220531175602-6903 --alsologtostderr -v=3: (20.069879843s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220531175602-6903 -n newest-cni-20220531175602-6903
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220531175602-6903 -n newest-cni-20220531175602-6903: exit status 7 (93.361729ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220531175602-6903 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (34.11s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220531175602-6903 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6
E0531 18:00:39.387892    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/enable-default-cni-20220531174029-6903/client.crt: no such file or directory
E0531 18:00:40.674905    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/bridge-20220531174029-6903/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220531175602-6903 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6: (33.719271521s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220531175602-6903 -n newest-cni-20220531175602-6903
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.11s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220531175602-6903 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.57s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220531175323-6903 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220531175323-6903 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/no-preload/serial/Stop (11.2s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220531175323-6903 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220531175323-6903 --alsologtostderr -v=3: (11.196589111s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.20s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220531175323-6903 -n no-preload-20220531175323-6903: exit status 7 (92.846815ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220531175323-6903 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.55s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220531175509-6903 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220531175509-6903 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.55s)

TestStartStop/group/default-k8s-different-port/serial/Stop (9.45s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220531175509-6903 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220531175509-6903 --alsologtostderr -v=3: (9.445766901s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (9.45s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220531175509-6903 -n default-k8s-different-port-20220531175509-6903: exit status 7 (93.975085ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220531175509-6903 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.53s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220531175604-6903 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220531175604-6903 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.53s)

TestStartStop/group/embed-certs/serial/Stop (10.41s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220531175604-6903 --alsologtostderr -v=3
E0531 18:08:57.610878    6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/functional-20220531171704-6903/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220531175604-6903 --alsologtostderr -v=3: (10.408808674s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220531175604-6903 -n embed-certs-20220531175604-6903: exit status 7 (91.875221ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220531175604-6903 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

Test skip (23/265)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestDownloadOnly/v1.23.6/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.6/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (0.62s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20220531174028-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220531174028-6903
--- SKIP: TestNetworkPlugins/group/kubenet (0.62s)

TestNetworkPlugins/group/flannel (0.26s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220531174029-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220531174029-6903
--- SKIP: TestNetworkPlugins/group/flannel (0.26s)

TestNetworkPlugins/group/custom-flannel (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220531174030-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-20220531174030-6903
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.26s)

TestStartStop/group/disable-driver-mounts (0.45s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220531175323-6903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220531175323-6903
--- SKIP: TestStartStop/group/disable-driver-mounts (0.45s)